00:00:00.002 Started by upstream project "autotest-per-patch" build number 124278 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.111 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.112 The recommended git tool is: git 00:00:00.112 using credential 00000000-0000-0000-0000-000000000002 00:00:00.113 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.150 Fetching changes from the remote Git repository 00:00:00.151 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.181 Using shallow fetch with depth 1 00:00:00.181 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.181 > git --version # timeout=10 00:00:00.210 > git --version # 'git version 2.39.2' 00:00:00.210 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.225 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.225 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.110 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.119 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.131 Checking out Revision 66b17c5c038844009fa2e0a881226613e4fd4f11 (FETCH_HEAD) 00:00:07.131 > git config core.sparsecheckout # timeout=10 00:00:07.141 > git read-tree -mu HEAD # timeout=10 00:00:07.156 > git checkout -f 66b17c5c038844009fa2e0a881226613e4fd4f11 # timeout=5 00:00:07.177 Commit message: "ipxe: Switch tests to use fedora39" 00:00:07.177 > git rev-list --no-walk 2d1b05126df47e5e0a48a6575a0601eb6e3ec2af # timeout=10 00:00:07.289 [Pipeline] Start of Pipeline 00:00:07.304 [Pipeline] library 00:00:07.306 Loading library shm_lib@master 00:00:07.753 Library shm_lib@master is cached. Copying from home. 00:00:07.787 [Pipeline] node 00:00:07.885 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.888 [Pipeline] { 00:00:07.909 [Pipeline] catchError 00:00:07.912 [Pipeline] { 00:00:07.931 [Pipeline] wrap 00:00:07.944 [Pipeline] { 00:00:07.955 [Pipeline] stage 00:00:07.957 [Pipeline] { (Prologue) 00:00:08.292 [Pipeline] sh 00:00:08.575 + logger -p user.info -t JENKINS-CI 00:00:08.591 [Pipeline] echo 00:00:08.592 Node: CYP9 00:00:08.598 [Pipeline] sh 00:00:08.906 [Pipeline] setCustomBuildProperty 00:00:08.921 [Pipeline] echo 00:00:08.922 Cleanup processes 00:00:08.927 [Pipeline] sh 00:00:09.213 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.213 764499 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.227 [Pipeline] sh 00:00:09.514 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.514 ++ grep -v 'sudo pgrep' 00:00:09.514 ++ awk '{print $1}' 00:00:09.514 + sudo kill -9 00:00:09.514 + true 00:00:09.527 [Pipeline] cleanWs 00:00:09.537 [WS-CLEANUP] Deleting project workspace... 00:00:09.537 [WS-CLEANUP] Deferred wipeout is used... 
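The cleanup trace above is the job's guard against stale test processes: pgrep lists anything still running out of the workspace's spdk tree, the pgrep invocation itself is filtered back out, and whatever remains is force-killed. A minimal sketch of that idiom, with the pid capture made explicit (the WS variable is introduced here only for brevity; in this run the list is empty, so kill -9 receives no arguments and the trailing true swallows its non-zero exit):

    # find test processes left over from a previous run of this job
    WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    PIDS=$(sudo pgrep -af "$WS/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # force-kill them; tolerate an empty list so this step never fails the stage
    sudo kill -9 $PIDS || true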
00:00:09.544 [WS-CLEANUP] done 00:00:09.548 [Pipeline] setCustomBuildProperty 00:00:09.564 [Pipeline] sh 00:00:09.848 + sudo git config --global --replace-all safe.directory '*' 00:00:09.925 [Pipeline] nodesByLabel 00:00:09.927 Found a total of 2 nodes with the 'sorcerer' label 00:00:09.935 [Pipeline] httpRequest 00:00:09.939 HttpMethod: GET 00:00:09.940 URL: http://10.211.164.101/packages/jbp_66b17c5c038844009fa2e0a881226613e4fd4f11.tar.gz 00:00:09.943 Sending request to url: http://10.211.164.101/packages/jbp_66b17c5c038844009fa2e0a881226613e4fd4f11.tar.gz 00:00:09.963 Response Code: HTTP/1.1 200 OK 00:00:09.963 Success: Status code 200 is in the accepted range: 200,404 00:00:09.964 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_66b17c5c038844009fa2e0a881226613e4fd4f11.tar.gz 00:00:12.980 [Pipeline] sh 00:00:13.268 + tar --no-same-owner -xf jbp_66b17c5c038844009fa2e0a881226613e4fd4f11.tar.gz 00:00:13.287 [Pipeline] httpRequest 00:00:13.293 HttpMethod: GET 00:00:13.293 URL: http://10.211.164.101/packages/spdk_b16523e5e366c50a903b52e034b47bdc8bdad2b3.tar.gz 00:00:13.294 Sending request to url: http://10.211.164.101/packages/spdk_b16523e5e366c50a903b52e034b47bdc8bdad2b3.tar.gz 00:00:13.312 Response Code: HTTP/1.1 200 OK 00:00:13.313 Success: Status code 200 is in the accepted range: 200,404 00:00:13.313 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_b16523e5e366c50a903b52e034b47bdc8bdad2b3.tar.gz 00:00:53.218 [Pipeline] sh 00:00:53.507 + tar --no-same-owner -xf spdk_b16523e5e366c50a903b52e034b47bdc8bdad2b3.tar.gz 00:00:56.826 [Pipeline] sh 00:00:57.115 + git -C spdk log --oneline -n5 00:00:57.115 b16523e5e lib/ublk: add option to retain ublk device on exiting or recovery fails 00:00:57.115 e55c9a812 vbdev_error: decrement error_num atomically 00:00:57.115 f16e9f4d2 lib/event: framework_get_reactors supports getting pid and tid 00:00:57.115 2d610abe8 lib/env_dpdk: add spdk_get_tid function 00:00:57.115 f470a0dc6 event: do not call reactor events from spdk_thread context 00:00:57.129 [Pipeline] } 00:00:57.149 [Pipeline] // stage 00:00:57.158 [Pipeline] stage 00:00:57.160 [Pipeline] { (Prepare) 00:00:57.182 [Pipeline] writeFile 00:00:57.204 [Pipeline] sh 00:00:57.493 + logger -p user.info -t JENKINS-CI 00:00:57.509 [Pipeline] sh 00:00:57.797 + logger -p user.info -t JENKINS-CI 00:00:57.813 [Pipeline] sh 00:00:58.117 + cat autorun-spdk.conf 00:00:58.117 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.117 SPDK_TEST_NVMF=1 00:00:58.117 SPDK_TEST_NVME_CLI=1 00:00:58.117 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.117 SPDK_TEST_NVMF_NICS=e810 00:00:58.117 SPDK_TEST_VFIOUSER=1 00:00:58.117 SPDK_RUN_UBSAN=1 00:00:58.117 NET_TYPE=phy 00:00:58.126 RUN_NIGHTLY=0 00:00:58.131 [Pipeline] readFile 00:00:58.158 [Pipeline] withEnv 00:00:58.161 [Pipeline] { 00:00:58.175 [Pipeline] sh 00:00:58.461 + set -ex 00:00:58.461 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:58.461 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:58.461 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.461 ++ SPDK_TEST_NVMF=1 00:00:58.461 ++ SPDK_TEST_NVME_CLI=1 00:00:58.461 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.461 ++ SPDK_TEST_NVMF_NICS=e810 00:00:58.461 ++ SPDK_TEST_VFIOUSER=1 00:00:58.461 ++ SPDK_RUN_UBSAN=1 00:00:58.461 ++ NET_TYPE=phy 00:00:58.461 ++ RUN_NIGHTLY=0 00:00:58.461 + case $SPDK_TEST_NVMF_NICS in 00:00:58.461 + DRIVERS=ice 00:00:58.461 + [[ tcp == \r\d\m\a ]] 00:00:58.461 + [[ -n ice ]] 00:00:58.461 + sudo rmmod mlx4_ib mlx5_ib 
irdma i40iw iw_cxgb4
00:00:58.461 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:58.461 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:58.461 rmmod: ERROR: Module irdma is not currently loaded
00:00:58.461 rmmod: ERROR: Module i40iw is not currently loaded
00:00:58.461 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:58.461 + true
00:00:58.461 + for D in $DRIVERS
00:00:58.461 + sudo modprobe ice
00:00:58.461 + exit 0
00:00:58.472 [Pipeline] }
00:00:58.493 [Pipeline] // withEnv
00:00:58.499 [Pipeline] }
00:00:58.518 [Pipeline] // stage
00:00:58.528 [Pipeline] catchError
00:00:58.530 [Pipeline] {
00:00:58.547 [Pipeline] timeout
00:00:58.547 Timeout set to expire in 50 min
00:00:58.549 [Pipeline] {
00:00:58.567 [Pipeline] stage
00:00:58.569 [Pipeline] { (Tests)
00:00:58.590 [Pipeline] sh
00:00:58.879 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:58.879 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:58.879 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:58.879 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:58.879 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:58.879 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:58.879 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:58.879 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:58.879 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:58.879 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:58.879 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:58.879 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:58.879 + source /etc/os-release
00:00:58.879 ++ NAME='Fedora Linux'
00:00:58.879 ++ VERSION='38 (Cloud Edition)'
00:00:58.879 ++ ID=fedora
00:00:58.879 ++ VERSION_ID=38
00:00:58.879 ++ VERSION_CODENAME=
00:00:58.879 ++ PLATFORM_ID=platform:f38
00:00:58.879 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:58.879 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:58.879 ++ LOGO=fedora-logo-icon
00:00:58.879 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:58.879 ++ HOME_URL=https://fedoraproject.org/
00:00:58.879 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:58.879 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:58.879 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:58.879 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:58.879 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:58.879 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:58.879 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:58.879 ++ SUPPORT_END=2024-05-14
00:00:58.879 ++ VARIANT='Cloud Edition'
00:00:58.879 ++ VARIANT_ID=cloud
00:00:58.879 + uname -a
00:00:58.879 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:58.879 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:02.186 Hugepages
00:01:02.186 node hugesize free / total
00:01:02.186 node0 1048576kB 0 / 0
00:01:02.186 node0 2048kB 0 / 0
00:01:02.186 node1 1048576kB 0 / 0
00:01:02.186 node1 2048kB 0 / 0
00:01:02.186
00:01:02.186 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:02.186 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:02.186 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:02.186 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:02.186 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:02.186 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:02.186 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:02.186 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:02.186 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:02.186 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:02.186 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:02.186 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:02.186 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:02.186 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:02.186 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:02.186 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:02.186 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:02.186 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:02.186 + rm -f /tmp/spdk-ld-path
00:01:02.186 + source autorun-spdk.conf
00:01:02.186 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:02.186 ++ SPDK_TEST_NVMF=1
00:01:02.186 ++ SPDK_TEST_NVME_CLI=1
00:01:02.186 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:02.186 ++ SPDK_TEST_NVMF_NICS=e810
00:01:02.186 ++ SPDK_TEST_VFIOUSER=1
00:01:02.186 ++ SPDK_RUN_UBSAN=1
00:01:02.186 ++ NET_TYPE=phy
00:01:02.186 ++ RUN_NIGHTLY=0
00:01:02.186 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:02.186 + [[ -n '' ]]
00:01:02.186 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:02.186 + for M in /var/spdk/build-*-manifest.txt
00:01:02.186 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:02.186 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:02.186 + for M in /var/spdk/build-*-manifest.txt
00:01:02.186 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:02.186 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:02.186 ++ uname
00:01:02.186 + [[ Linux == \L\i\n\u\x ]]
00:01:02.186 + sudo dmesg -T
00:01:02.186 + sudo dmesg --clear
00:01:02.186 + dmesg_pid=766059
00:01:02.186 + [[ Fedora Linux == FreeBSD ]]
00:01:02.186 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:02.186 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:02.186 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:02.186 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:02.186 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:02.186 + [[ -x /usr/src/fio-static/fio ]]
00:01:02.186 + export FIO_BIN=/usr/src/fio-static/fio
00:01:02.186 + sudo dmesg -Tw
00:01:02.186 + FIO_BIN=/usr/src/fio-static/fio
00:01:02.186 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:02.186 + [[ !
-v VFIO_QEMU_BIN ]] 00:01:02.186 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:02.186 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:02.186 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:02.186 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:02.186 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:02.186 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:02.186 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:02.186 Test configuration: 00:01:02.186 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.186 SPDK_TEST_NVMF=1 00:01:02.186 SPDK_TEST_NVME_CLI=1 00:01:02.186 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:02.186 SPDK_TEST_NVMF_NICS=e810 00:01:02.186 SPDK_TEST_VFIOUSER=1 00:01:02.186 SPDK_RUN_UBSAN=1 00:01:02.186 NET_TYPE=phy 00:01:02.187 RUN_NIGHTLY=0 09:15:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:02.187 09:15:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:02.187 09:15:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:02.187 09:15:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:02.187 09:15:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:02.187 09:15:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:02.187 09:15:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:02.187 09:15:33 -- paths/export.sh@5 -- $ export PATH 00:01:02.187 09:15:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:02.187 09:15:33 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:02.187 09:15:33 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:02.187 09:15:33 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718090133.XXXXXX 00:01:02.187 09:15:33 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718090133.1NapgB 00:01:02.187 09:15:33 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:02.187 09:15:33 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:01:02.187 09:15:33 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:02.187 09:15:33 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:02.187 09:15:33 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:02.187 09:15:33 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:02.187 09:15:33 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:02.187 09:15:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:02.187 09:15:33 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:02.187 09:15:33 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:02.187 09:15:33 -- pm/common@17 -- $ local monitor 00:01:02.187 09:15:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:02.187 09:15:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:02.187 09:15:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:02.187 09:15:33 -- pm/common@21 -- $ date +%s 00:01:02.187 09:15:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:02.187 09:15:33 -- pm/common@25 -- $ sleep 1 00:01:02.187 09:15:33 -- pm/common@21 -- $ date +%s 00:01:02.187 09:15:33 -- pm/common@21 -- $ date +%s 00:01:02.187 09:15:33 -- pm/common@21 -- $ date +%s 00:01:02.187 09:15:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718090133 00:01:02.187 09:15:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718090133 00:01:02.187 09:15:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718090133 00:01:02.187 09:15:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718090133 00:01:02.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718090133_collect-vmstat.pm.log 00:01:02.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718090133_collect-cpu-load.pm.log 00:01:02.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718090133_collect-cpu-temp.pm.log 00:01:02.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718090133_collect-bmc-pm.bmc.pm.log 00:01:03.130 09:15:34 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:03.130 09:15:34 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:03.130 09:15:34 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:03.130 09:15:34 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:03.130 09:15:34 -- spdk/autobuild.sh@16 -- $ date -u 00:01:03.130 Tue Jun 11 07:15:34 AM UTC 2024 00:01:03.130 09:15:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:03.130 v24.09-pre-54-gb16523e5e 00:01:03.130 09:15:34 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:03.130 09:15:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:03.130 09:15:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:03.130 09:15:34 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:03.130 09:15:34 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:03.130 09:15:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:03.130 ************************************ 00:01:03.130 START TEST ubsan 00:01:03.130 ************************************ 00:01:03.131 09:15:34 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan' 00:01:03.131 using ubsan 00:01:03.131 00:01:03.131 real 0m0.001s 00:01:03.131 user 0m0.001s 00:01:03.131 sys 0m0.000s 00:01:03.131 09:15:34 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:01:03.131 09:15:34 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:03.131 ************************************ 00:01:03.131 END TEST ubsan 00:01:03.131 ************************************ 00:01:03.131 09:15:34 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:03.131 09:15:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:03.131 09:15:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:03.131 09:15:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:03.131 09:15:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:03.131 09:15:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:03.131 09:15:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:03.131 09:15:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:03.131 09:15:34 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:03.392 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:03.392 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:03.653 Using 'verbs' RDMA provider 00:01:19.512 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:31.751 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:31.751 Creating mk/config.mk...done. 00:01:31.751 Creating mk/cc.flags.mk...done. 00:01:31.751 Type 'make' to build. 00:01:31.751 09:16:03 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:31.751 09:16:03 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:31.751 09:16:03 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:31.751 09:16:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.751 ************************************ 00:01:31.751 START TEST make 00:01:31.751 ************************************ 00:01:31.751 09:16:03 make -- common/autotest_common.sh@1124 -- $ make -j144 00:01:32.013 make[1]: Nothing to be done for 'all'. 
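The make that starts here finishes the same two-step build the log has just set up: configure with the flags assembled by get_config_params (plus --with-shared), then a parallel make. As a minimal sketch for reproducing it outside Jenkins, assuming a local spdk checkout and substituting $(nproc) for the fixed -j144 used in this run:

    # identical configuration to the one logged above
    cd spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    # scale the job count to the local machine instead of the CI node's 144 threads
    make -j"$(nproc)"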
00:01:33.451 The Meson build system 00:01:33.451 Version: 1.3.1 00:01:33.451 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:33.451 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:33.451 Build type: native build 00:01:33.451 Project name: libvfio-user 00:01:33.451 Project version: 0.0.1 00:01:33.451 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:33.451 C linker for the host machine: cc ld.bfd 2.39-16 00:01:33.451 Host machine cpu family: x86_64 00:01:33.451 Host machine cpu: x86_64 00:01:33.451 Run-time dependency threads found: YES 00:01:33.451 Library dl found: YES 00:01:33.451 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:33.451 Run-time dependency json-c found: YES 0.17 00:01:33.451 Run-time dependency cmocka found: YES 1.1.7 00:01:33.451 Program pytest-3 found: NO 00:01:33.451 Program flake8 found: NO 00:01:33.451 Program misspell-fixer found: NO 00:01:33.451 Program restructuredtext-lint found: NO 00:01:33.451 Program valgrind found: YES (/usr/bin/valgrind) 00:01:33.451 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:33.451 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:33.451 Compiler for C supports arguments -Wwrite-strings: YES 00:01:33.451 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:33.451 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:33.451 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:33.451 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:33.451 Build targets in project: 8 00:01:33.451 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:33.451 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:33.451 00:01:33.451 libvfio-user 0.0.1 00:01:33.451 00:01:33.451 User defined options 00:01:33.451 buildtype : debug 00:01:33.451 default_library: shared 00:01:33.451 libdir : /usr/local/lib 00:01:33.451 00:01:33.451 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:33.710 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:33.710 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:33.710 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:33.710 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:33.710 [4/37] Compiling C object samples/null.p/null.c.o 00:01:33.710 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:33.710 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:33.710 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:33.710 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:33.710 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:33.710 [10/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:33.710 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:33.710 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:33.710 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:33.710 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:33.710 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:33.710 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:33.710 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:33.710 [18/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:33.710 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:33.710 [20/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:33.710 [21/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:33.710 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:33.710 [23/37] Compiling C object samples/client.p/client.c.o 00:01:33.710 [24/37] Compiling C object samples/server.p/server.c.o 00:01:33.710 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:33.710 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:33.969 [27/37] Linking target samples/client 00:01:33.969 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:33.969 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:33.969 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:33.969 [31/37] Linking target test/unit_tests 00:01:33.969 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:34.229 [33/37] Linking target samples/gpio-pci-idio-16 00:01:34.229 [34/37] Linking target samples/server 00:01:34.229 [35/37] Linking target samples/lspci 00:01:34.229 [36/37] Linking target samples/null 00:01:34.229 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:34.229 INFO: autodetecting backend as ninja 00:01:34.229 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
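The libvfio-user build above, and the install step that follows, are a stock meson/ninja sequence driven from SPDK's build. A standalone sketch of the equivalent invocation, reconstructed from the "User defined options" summary above and the DESTDIR on the next log line (the exact wrapper SPDK uses may differ):

    # configure, build and stage libvfio-user with the options shown in the log
    meson setup build-debug libvfio-user --buildtype=debug \
        -Ddefault_library=shared -Dlibdir=/usr/local/lib
    ninja -C build-debug
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
        meson install --quiet -C build-debug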
00:01:34.229 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:34.491 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:34.491 ninja: no work to do. 00:01:41.093 The Meson build system 00:01:41.093 Version: 1.3.1 00:01:41.093 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:41.093 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:41.093 Build type: native build 00:01:41.093 Program cat found: YES (/usr/bin/cat) 00:01:41.093 Project name: DPDK 00:01:41.093 Project version: 24.03.0 00:01:41.093 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:41.093 C linker for the host machine: cc ld.bfd 2.39-16 00:01:41.093 Host machine cpu family: x86_64 00:01:41.093 Host machine cpu: x86_64 00:01:41.093 Message: ## Building in Developer Mode ## 00:01:41.093 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:41.093 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:41.093 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:41.093 Program python3 found: YES (/usr/bin/python3) 00:01:41.093 Program cat found: YES (/usr/bin/cat) 00:01:41.093 Compiler for C supports arguments -march=native: YES 00:01:41.093 Checking for size of "void *" : 8 00:01:41.093 Checking for size of "void *" : 8 (cached) 00:01:41.093 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:41.093 Library m found: YES 00:01:41.093 Library numa found: YES 00:01:41.093 Has header "numaif.h" : YES 00:01:41.093 Library fdt found: NO 00:01:41.093 Library execinfo found: NO 00:01:41.093 Has header "execinfo.h" : YES 00:01:41.093 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:41.093 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:41.093 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:41.093 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:41.093 Run-time dependency openssl found: YES 3.0.9 00:01:41.093 Run-time dependency libpcap found: YES 1.10.4 00:01:41.093 Has header "pcap.h" with dependency libpcap: YES 00:01:41.093 Compiler for C supports arguments -Wcast-qual: YES 00:01:41.093 Compiler for C supports arguments -Wdeprecated: YES 00:01:41.093 Compiler for C supports arguments -Wformat: YES 00:01:41.093 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:41.093 Compiler for C supports arguments -Wformat-security: NO 00:01:41.093 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:41.093 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:41.093 Compiler for C supports arguments -Wnested-externs: YES 00:01:41.093 Compiler for C supports arguments -Wold-style-definition: YES 00:01:41.093 Compiler for C supports arguments -Wpointer-arith: YES 00:01:41.093 Compiler for C supports arguments -Wsign-compare: YES 00:01:41.093 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:41.093 Compiler for C supports arguments -Wundef: YES 00:01:41.093 Compiler for C supports arguments -Wwrite-strings: YES 00:01:41.093 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:41.093 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:41.093 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:41.093 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:41.093 Program objdump found: YES (/usr/bin/objdump) 00:01:41.093 Compiler for C supports arguments -mavx512f: YES 00:01:41.093 Checking if "AVX512 checking" compiles: YES 00:01:41.093 Fetching value of define "__SSE4_2__" : 1 00:01:41.093 Fetching value of define "__AES__" : 1 00:01:41.093 Fetching value of define "__AVX__" : 1 00:01:41.093 Fetching value of define "__AVX2__" : 1 00:01:41.093 Fetching value of define "__AVX512BW__" : 1 00:01:41.093 Fetching value of define "__AVX512CD__" : 1 00:01:41.093 Fetching value of define "__AVX512DQ__" : 1 00:01:41.093 Fetching value of define "__AVX512F__" : 1 00:01:41.093 Fetching value of define "__AVX512VL__" : 1 00:01:41.093 Fetching value of define "__PCLMUL__" : 1 00:01:41.093 Fetching value of define "__RDRND__" : 1 00:01:41.093 Fetching value of define "__RDSEED__" : 1 00:01:41.093 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:41.093 Fetching value of define "__znver1__" : (undefined) 00:01:41.093 Fetching value of define "__znver2__" : (undefined) 00:01:41.093 Fetching value of define "__znver3__" : (undefined) 00:01:41.093 Fetching value of define "__znver4__" : (undefined) 00:01:41.093 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:41.093 Message: lib/log: Defining dependency "log" 00:01:41.093 Message: lib/kvargs: Defining dependency "kvargs" 00:01:41.093 Message: lib/telemetry: Defining dependency "telemetry" 00:01:41.093 Checking for function "getentropy" : NO 00:01:41.093 Message: lib/eal: Defining dependency "eal" 00:01:41.093 Message: lib/ring: Defining dependency "ring" 00:01:41.093 Message: lib/rcu: Defining dependency "rcu" 00:01:41.093 Message: lib/mempool: Defining dependency "mempool" 00:01:41.093 Message: lib/mbuf: Defining dependency "mbuf" 00:01:41.093 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:41.093 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:41.093 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:41.093 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:41.093 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:41.093 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:41.093 Compiler for C supports arguments -mpclmul: YES 00:01:41.093 Compiler for C supports arguments -maes: YES 00:01:41.093 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:41.093 Compiler for C supports arguments -mavx512bw: YES 00:01:41.093 Compiler for C supports arguments -mavx512dq: YES 00:01:41.093 Compiler for C supports arguments -mavx512vl: YES 00:01:41.093 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:41.093 Compiler for C supports arguments -mavx2: YES 00:01:41.093 Compiler for C supports arguments -mavx: YES 00:01:41.093 Message: lib/net: Defining dependency "net" 00:01:41.093 Message: lib/meter: Defining dependency "meter" 00:01:41.093 Message: lib/ethdev: Defining dependency "ethdev" 00:01:41.093 Message: lib/pci: Defining dependency "pci" 00:01:41.093 Message: lib/cmdline: Defining dependency "cmdline" 00:01:41.093 Message: lib/hash: Defining dependency "hash" 00:01:41.093 Message: lib/timer: Defining dependency "timer" 00:01:41.093 Message: lib/compressdev: Defining dependency "compressdev" 00:01:41.093 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:41.093 Message: lib/dmadev: Defining dependency "dmadev" 00:01:41.093 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:01:41.093 Message: lib/power: Defining dependency "power" 00:01:41.093 Message: lib/reorder: Defining dependency "reorder" 00:01:41.093 Message: lib/security: Defining dependency "security" 00:01:41.093 Has header "linux/userfaultfd.h" : YES 00:01:41.093 Has header "linux/vduse.h" : YES 00:01:41.093 Message: lib/vhost: Defining dependency "vhost" 00:01:41.093 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:41.093 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:41.093 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:41.093 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:41.093 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:41.093 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:41.093 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:41.093 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:41.093 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:41.093 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:41.093 Program doxygen found: YES (/usr/bin/doxygen) 00:01:41.093 Configuring doxy-api-html.conf using configuration 00:01:41.093 Configuring doxy-api-man.conf using configuration 00:01:41.093 Program mandb found: YES (/usr/bin/mandb) 00:01:41.093 Program sphinx-build found: NO 00:01:41.093 Configuring rte_build_config.h using configuration 00:01:41.093 Message: 00:01:41.093 ================= 00:01:41.093 Applications Enabled 00:01:41.093 ================= 00:01:41.093 00:01:41.093 apps: 00:01:41.093 00:01:41.093 00:01:41.093 Message: 00:01:41.093 ================= 00:01:41.093 Libraries Enabled 00:01:41.093 ================= 00:01:41.093 00:01:41.093 libs: 00:01:41.093 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:41.093 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:41.093 cryptodev, dmadev, power, reorder, security, vhost, 00:01:41.093 00:01:41.093 Message: 00:01:41.093 =============== 00:01:41.093 Drivers Enabled 00:01:41.093 =============== 00:01:41.093 00:01:41.093 common: 00:01:41.093 00:01:41.093 bus: 00:01:41.093 pci, vdev, 00:01:41.093 mempool: 00:01:41.093 ring, 00:01:41.093 dma: 00:01:41.093 00:01:41.093 net: 00:01:41.093 00:01:41.093 crypto: 00:01:41.093 00:01:41.093 compress: 00:01:41.093 00:01:41.093 vdpa: 00:01:41.093 00:01:41.093 00:01:41.093 Message: 00:01:41.093 ================= 00:01:41.093 Content Skipped 00:01:41.093 ================= 00:01:41.093 00:01:41.093 apps: 00:01:41.093 dumpcap: explicitly disabled via build config 00:01:41.093 graph: explicitly disabled via build config 00:01:41.093 pdump: explicitly disabled via build config 00:01:41.093 proc-info: explicitly disabled via build config 00:01:41.094 test-acl: explicitly disabled via build config 00:01:41.094 test-bbdev: explicitly disabled via build config 00:01:41.094 test-cmdline: explicitly disabled via build config 00:01:41.094 test-compress-perf: explicitly disabled via build config 00:01:41.094 test-crypto-perf: explicitly disabled via build config 00:01:41.094 test-dma-perf: explicitly disabled via build config 00:01:41.094 test-eventdev: explicitly disabled via build config 00:01:41.094 test-fib: explicitly disabled via build config 00:01:41.094 test-flow-perf: explicitly disabled via build config 00:01:41.094 test-gpudev: explicitly disabled via build config 00:01:41.094 
test-mldev: explicitly disabled via build config 00:01:41.094 test-pipeline: explicitly disabled via build config 00:01:41.094 test-pmd: explicitly disabled via build config 00:01:41.094 test-regex: explicitly disabled via build config 00:01:41.094 test-sad: explicitly disabled via build config 00:01:41.094 test-security-perf: explicitly disabled via build config 00:01:41.094 00:01:41.094 libs: 00:01:41.094 argparse: explicitly disabled via build config 00:01:41.094 metrics: explicitly disabled via build config 00:01:41.094 acl: explicitly disabled via build config 00:01:41.094 bbdev: explicitly disabled via build config 00:01:41.094 bitratestats: explicitly disabled via build config 00:01:41.094 bpf: explicitly disabled via build config 00:01:41.094 cfgfile: explicitly disabled via build config 00:01:41.094 distributor: explicitly disabled via build config 00:01:41.094 efd: explicitly disabled via build config 00:01:41.094 eventdev: explicitly disabled via build config 00:01:41.094 dispatcher: explicitly disabled via build config 00:01:41.094 gpudev: explicitly disabled via build config 00:01:41.094 gro: explicitly disabled via build config 00:01:41.094 gso: explicitly disabled via build config 00:01:41.094 ip_frag: explicitly disabled via build config 00:01:41.094 jobstats: explicitly disabled via build config 00:01:41.094 latencystats: explicitly disabled via build config 00:01:41.094 lpm: explicitly disabled via build config 00:01:41.094 member: explicitly disabled via build config 00:01:41.094 pcapng: explicitly disabled via build config 00:01:41.094 rawdev: explicitly disabled via build config 00:01:41.094 regexdev: explicitly disabled via build config 00:01:41.094 mldev: explicitly disabled via build config 00:01:41.094 rib: explicitly disabled via build config 00:01:41.094 sched: explicitly disabled via build config 00:01:41.094 stack: explicitly disabled via build config 00:01:41.094 ipsec: explicitly disabled via build config 00:01:41.094 pdcp: explicitly disabled via build config 00:01:41.094 fib: explicitly disabled via build config 00:01:41.094 port: explicitly disabled via build config 00:01:41.094 pdump: explicitly disabled via build config 00:01:41.094 table: explicitly disabled via build config 00:01:41.094 pipeline: explicitly disabled via build config 00:01:41.094 graph: explicitly disabled via build config 00:01:41.094 node: explicitly disabled via build config 00:01:41.094 00:01:41.094 drivers: 00:01:41.094 common/cpt: not in enabled drivers build config 00:01:41.094 common/dpaax: not in enabled drivers build config 00:01:41.094 common/iavf: not in enabled drivers build config 00:01:41.094 common/idpf: not in enabled drivers build config 00:01:41.094 common/ionic: not in enabled drivers build config 00:01:41.094 common/mvep: not in enabled drivers build config 00:01:41.094 common/octeontx: not in enabled drivers build config 00:01:41.094 bus/auxiliary: not in enabled drivers build config 00:01:41.094 bus/cdx: not in enabled drivers build config 00:01:41.094 bus/dpaa: not in enabled drivers build config 00:01:41.094 bus/fslmc: not in enabled drivers build config 00:01:41.094 bus/ifpga: not in enabled drivers build config 00:01:41.094 bus/platform: not in enabled drivers build config 00:01:41.094 bus/uacce: not in enabled drivers build config 00:01:41.094 bus/vmbus: not in enabled drivers build config 00:01:41.094 common/cnxk: not in enabled drivers build config 00:01:41.094 common/mlx5: not in enabled drivers build config 00:01:41.094 common/nfp: not in enabled drivers 
build config 00:01:41.094 common/nitrox: not in enabled drivers build config 00:01:41.094 common/qat: not in enabled drivers build config 00:01:41.094 common/sfc_efx: not in enabled drivers build config 00:01:41.094 mempool/bucket: not in enabled drivers build config 00:01:41.094 mempool/cnxk: not in enabled drivers build config 00:01:41.094 mempool/dpaa: not in enabled drivers build config 00:01:41.094 mempool/dpaa2: not in enabled drivers build config 00:01:41.094 mempool/octeontx: not in enabled drivers build config 00:01:41.094 mempool/stack: not in enabled drivers build config 00:01:41.094 dma/cnxk: not in enabled drivers build config 00:01:41.094 dma/dpaa: not in enabled drivers build config 00:01:41.094 dma/dpaa2: not in enabled drivers build config 00:01:41.094 dma/hisilicon: not in enabled drivers build config 00:01:41.094 dma/idxd: not in enabled drivers build config 00:01:41.094 dma/ioat: not in enabled drivers build config 00:01:41.094 dma/skeleton: not in enabled drivers build config 00:01:41.094 net/af_packet: not in enabled drivers build config 00:01:41.094 net/af_xdp: not in enabled drivers build config 00:01:41.094 net/ark: not in enabled drivers build config 00:01:41.094 net/atlantic: not in enabled drivers build config 00:01:41.094 net/avp: not in enabled drivers build config 00:01:41.094 net/axgbe: not in enabled drivers build config 00:01:41.094 net/bnx2x: not in enabled drivers build config 00:01:41.094 net/bnxt: not in enabled drivers build config 00:01:41.094 net/bonding: not in enabled drivers build config 00:01:41.094 net/cnxk: not in enabled drivers build config 00:01:41.094 net/cpfl: not in enabled drivers build config 00:01:41.094 net/cxgbe: not in enabled drivers build config 00:01:41.094 net/dpaa: not in enabled drivers build config 00:01:41.094 net/dpaa2: not in enabled drivers build config 00:01:41.094 net/e1000: not in enabled drivers build config 00:01:41.094 net/ena: not in enabled drivers build config 00:01:41.094 net/enetc: not in enabled drivers build config 00:01:41.094 net/enetfec: not in enabled drivers build config 00:01:41.094 net/enic: not in enabled drivers build config 00:01:41.094 net/failsafe: not in enabled drivers build config 00:01:41.094 net/fm10k: not in enabled drivers build config 00:01:41.094 net/gve: not in enabled drivers build config 00:01:41.094 net/hinic: not in enabled drivers build config 00:01:41.094 net/hns3: not in enabled drivers build config 00:01:41.094 net/i40e: not in enabled drivers build config 00:01:41.094 net/iavf: not in enabled drivers build config 00:01:41.094 net/ice: not in enabled drivers build config 00:01:41.094 net/idpf: not in enabled drivers build config 00:01:41.094 net/igc: not in enabled drivers build config 00:01:41.094 net/ionic: not in enabled drivers build config 00:01:41.094 net/ipn3ke: not in enabled drivers build config 00:01:41.094 net/ixgbe: not in enabled drivers build config 00:01:41.094 net/mana: not in enabled drivers build config 00:01:41.094 net/memif: not in enabled drivers build config 00:01:41.094 net/mlx4: not in enabled drivers build config 00:01:41.094 net/mlx5: not in enabled drivers build config 00:01:41.094 net/mvneta: not in enabled drivers build config 00:01:41.094 net/mvpp2: not in enabled drivers build config 00:01:41.094 net/netvsc: not in enabled drivers build config 00:01:41.094 net/nfb: not in enabled drivers build config 00:01:41.094 net/nfp: not in enabled drivers build config 00:01:41.094 net/ngbe: not in enabled drivers build config 00:01:41.094 net/null: not in 
enabled drivers build config 00:01:41.094 net/octeontx: not in enabled drivers build config 00:01:41.094 net/octeon_ep: not in enabled drivers build config 00:01:41.094 net/pcap: not in enabled drivers build config 00:01:41.094 net/pfe: not in enabled drivers build config 00:01:41.094 net/qede: not in enabled drivers build config 00:01:41.094 net/ring: not in enabled drivers build config 00:01:41.094 net/sfc: not in enabled drivers build config 00:01:41.094 net/softnic: not in enabled drivers build config 00:01:41.094 net/tap: not in enabled drivers build config 00:01:41.094 net/thunderx: not in enabled drivers build config 00:01:41.094 net/txgbe: not in enabled drivers build config 00:01:41.094 net/vdev_netvsc: not in enabled drivers build config 00:01:41.094 net/vhost: not in enabled drivers build config 00:01:41.094 net/virtio: not in enabled drivers build config 00:01:41.094 net/vmxnet3: not in enabled drivers build config 00:01:41.094 raw/*: missing internal dependency, "rawdev" 00:01:41.094 crypto/armv8: not in enabled drivers build config 00:01:41.094 crypto/bcmfs: not in enabled drivers build config 00:01:41.094 crypto/caam_jr: not in enabled drivers build config 00:01:41.094 crypto/ccp: not in enabled drivers build config 00:01:41.094 crypto/cnxk: not in enabled drivers build config 00:01:41.094 crypto/dpaa_sec: not in enabled drivers build config 00:01:41.094 crypto/dpaa2_sec: not in enabled drivers build config 00:01:41.094 crypto/ipsec_mb: not in enabled drivers build config 00:01:41.094 crypto/mlx5: not in enabled drivers build config 00:01:41.094 crypto/mvsam: not in enabled drivers build config 00:01:41.094 crypto/nitrox: not in enabled drivers build config 00:01:41.094 crypto/null: not in enabled drivers build config 00:01:41.094 crypto/octeontx: not in enabled drivers build config 00:01:41.094 crypto/openssl: not in enabled drivers build config 00:01:41.094 crypto/scheduler: not in enabled drivers build config 00:01:41.094 crypto/uadk: not in enabled drivers build config 00:01:41.094 crypto/virtio: not in enabled drivers build config 00:01:41.094 compress/isal: not in enabled drivers build config 00:01:41.094 compress/mlx5: not in enabled drivers build config 00:01:41.094 compress/nitrox: not in enabled drivers build config 00:01:41.094 compress/octeontx: not in enabled drivers build config 00:01:41.094 compress/zlib: not in enabled drivers build config 00:01:41.094 regex/*: missing internal dependency, "regexdev" 00:01:41.094 ml/*: missing internal dependency, "mldev" 00:01:41.094 vdpa/ifc: not in enabled drivers build config 00:01:41.094 vdpa/mlx5: not in enabled drivers build config 00:01:41.094 vdpa/nfp: not in enabled drivers build config 00:01:41.094 vdpa/sfc: not in enabled drivers build config 00:01:41.094 event/*: missing internal dependency, "eventdev" 00:01:41.094 baseband/*: missing internal dependency, "bbdev" 00:01:41.094 gpu/*: missing internal dependency, "gpudev" 00:01:41.094 00:01:41.094 00:01:41.094 Build targets in project: 84 00:01:41.094 00:01:41.094 DPDK 24.03.0 00:01:41.094 00:01:41.094 User defined options 00:01:41.094 buildtype : debug 00:01:41.094 default_library : shared 00:01:41.094 libdir : lib 00:01:41.094 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:41.094 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:41.094 c_link_args : 00:01:41.094 cpu_instruction_set: native 00:01:41.094 disable_apps : 
test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:41.094 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:41.094 enable_docs : false 00:01:41.094 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:41.094 enable_kmods : false 00:01:41.094 tests : false 00:01:41.094 00:01:41.094 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:41.094 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:41.094 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:41.094 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:41.094 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:41.094 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:41.094 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:41.094 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:41.094 [7/267] Linking static target lib/librte_kvargs.a 00:01:41.094 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:41.094 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:41.094 [10/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:41.094 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:41.094 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:41.094 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:41.094 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:41.094 [15/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:41.094 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:41.094 [17/267] Linking static target lib/librte_log.a 00:01:41.094 [18/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:41.352 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:41.352 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:41.352 [21/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:41.352 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:41.352 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:41.352 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:41.352 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:41.352 [26/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:41.352 [27/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:41.352 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:41.352 [29/267] Linking static target lib/librte_pci.a 00:01:41.352 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:41.352 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:41.353 [32/267] 
Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:41.353 [33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:41.353 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:41.353 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:41.353 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:41.353 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:41.853 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:41.853 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:41.853 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:41.853 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:41.853 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:41.853 [43/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.853 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:41.853 [45/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:41.853 [46/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.853 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:41.853 [48/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:41.853 [49/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:41.853 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:41.853 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:41.853 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:41.853 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:41.853 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:41.853 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:41.853 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:41.853 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:41.853 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:41.853 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:41.853 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:41.853 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:41.853 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:41.853 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:41.853 [64/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:41.853 [65/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:41.853 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:41.853 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:41.853 [68/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:41.853 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:41.853 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:41.853 [71/267] Compiling C 
object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:41.853 [72/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:41.853 [73/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:41.853 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:41.853 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:41.853 [76/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:41.853 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:41.853 [78/267] Linking static target lib/librte_meter.a 00:01:41.853 [79/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:41.853 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:41.853 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:41.853 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:41.853 [83/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:41.853 [84/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:41.853 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:41.853 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:41.853 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:41.853 [88/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:41.853 [89/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:41.853 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:41.853 [91/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:41.853 [92/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:41.853 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:41.853 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:41.853 [95/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:41.853 [96/267] Linking static target lib/librte_ring.a 00:01:41.853 [97/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:41.853 [98/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:41.853 [99/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:41.853 [100/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:41.853 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:41.853 [102/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:41.853 [103/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:41.853 [104/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:41.853 [105/267] Linking static target lib/librte_timer.a 00:01:41.853 [106/267] Linking static target lib/librte_telemetry.a 00:01:41.853 [107/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:41.853 [108/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:41.853 [109/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:41.853 [110/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:41.853 [111/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:41.853 [112/267] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:41.853 [113/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:41.853 [114/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:41.853 [115/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:41.853 [116/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:41.853 [117/267] Linking static target lib/librte_dmadev.a 00:01:41.853 [118/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:41.853 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:41.853 [120/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:41.853 [121/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:41.853 [122/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:41.853 [123/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:41.853 [124/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:41.853 [125/267] Linking static target lib/librte_rcu.a 00:01:41.853 [126/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:41.853 [127/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:41.853 [128/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:41.853 [129/267] Linking static target lib/librte_cmdline.a 00:01:41.853 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:41.853 [131/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:41.853 [132/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:41.853 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:41.853 [134/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:41.853 [135/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.853 [136/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:41.853 [137/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:41.853 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:41.853 [139/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:41.853 [140/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:41.853 [141/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:41.853 [142/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:41.853 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:41.853 [144/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:41.853 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:41.853 [146/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:41.853 [147/267] Linking static target lib/librte_net.a 00:01:41.853 [148/267] Linking target lib/librte_log.so.24.1 00:01:41.853 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:41.853 [150/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:41.853 [151/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:41.853 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:41.853 [153/267] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:41.853 [154/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:41.853 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:41.853 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:41.853 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:41.853 [158/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:41.853 [159/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:41.853 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:41.853 [161/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:41.853 [162/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:41.853 [163/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:41.853 [164/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:41.853 [165/267] Linking static target lib/librte_compressdev.a 00:01:41.853 [166/267] Linking static target lib/librte_mempool.a 00:01:41.853 [167/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:41.853 [168/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:41.853 [169/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:41.853 [170/267] Linking static target lib/librte_security.a 00:01:41.853 [171/267] Linking static target lib/librte_power.a 00:01:41.853 [172/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:41.853 [173/267] Linking static target lib/librte_eal.a 00:01:42.111 [174/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:42.111 [175/267] Linking static target lib/librte_reorder.a 00:01:42.111 [176/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:42.111 [177/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:42.111 [178/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:42.111 [179/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.111 [180/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:42.111 [181/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:42.111 [182/267] Linking static target lib/librte_mbuf.a 00:01:42.111 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:42.111 [184/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:42.111 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:42.111 [186/267] Linking target lib/librte_kvargs.so.24.1 00:01:42.111 [187/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.111 [188/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:42.111 [189/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:42.111 [190/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:42.111 [191/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:42.112 [192/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:42.112 [193/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:42.112 [194/267] Linking static 
target drivers/librte_bus_vdev.a 00:01:42.112 [195/267] Linking static target lib/librte_hash.a 00:01:42.112 [196/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:42.112 [197/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:42.112 [198/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:42.112 [199/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.112 [200/267] Linking static target drivers/librte_bus_pci.a 00:01:42.112 [201/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:42.371 [202/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:42.371 [203/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:42.371 [204/267] Linking static target drivers/librte_mempool_ring.a 00:01:42.371 [205/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.371 [206/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.371 [207/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:42.371 [208/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:42.371 [209/267] Linking static target lib/librte_cryptodev.a 00:01:42.371 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.371 [211/267] Linking target lib/librte_telemetry.so.24.1 00:01:42.631 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.631 [213/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.631 [214/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.631 [215/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:42.631 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.631 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:42.631 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.892 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:42.892 [220/267] Linking static target lib/librte_ethdev.a 00:01:42.892 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.892 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.892 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.152 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.152 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.152 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.723 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:43.984 [228/267] Linking static target lib/librte_vhost.a 00:01:44.556 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.941 [230/267] Generating 
lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.530 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.919 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.919 [233/267] Linking target lib/librte_eal.so.24.1 00:01:53.919 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:53.919 [235/267] Linking target lib/librte_meter.so.24.1 00:01:53.919 [236/267] Linking target lib/librte_pci.so.24.1 00:01:53.919 [237/267] Linking target lib/librte_ring.so.24.1 00:01:53.919 [238/267] Linking target lib/librte_timer.so.24.1 00:01:53.919 [239/267] Linking target lib/librte_dmadev.so.24.1 00:01:53.919 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:53.919 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:54.180 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:54.180 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:54.180 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:54.180 [245/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:54.180 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:54.180 [247/267] Linking target lib/librte_rcu.so.24.1 00:01:54.180 [248/267] Linking target lib/librte_mempool.so.24.1 00:01:54.180 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:54.180 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:54.441 [251/267] Linking target lib/librte_mbuf.so.24.1 00:01:54.441 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:54.441 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:54.441 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:01:54.441 [255/267] Linking target lib/librte_reorder.so.24.1 00:01:54.441 [256/267] Linking target lib/librte_compressdev.so.24.1 00:01:54.441 [257/267] Linking target lib/librte_net.so.24.1 00:01:54.702 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:54.702 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:54.702 [260/267] Linking target lib/librte_hash.so.24.1 00:01:54.702 [261/267] Linking target lib/librte_security.so.24.1 00:01:54.702 [262/267] Linking target lib/librte_cmdline.so.24.1 00:01:54.702 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:54.963 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:54.963 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:54.963 [266/267] Linking target lib/librte_power.so.24.1 00:01:54.963 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:54.963 INFO: autodetecting backend as ninja 00:01:54.963 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:56.349 CC lib/ut_mock/mock.o 00:01:56.349 CC lib/ut/ut.o 00:01:56.349 CC lib/log/log.o 00:01:56.349 CC lib/log/log_flags.o 00:01:56.349 CC lib/log/log_deprecated.o 00:01:56.349 LIB libspdk_ut_mock.a 00:01:56.349 LIB libspdk_ut.a 00:01:56.349 LIB libspdk_log.a 00:01:56.349 SO libspdk_ut.so.2.0 00:01:56.349 SO 
libspdk_ut_mock.so.6.0 00:01:56.349 SO libspdk_log.so.7.0 00:01:56.648 SYMLINK libspdk_ut.so 00:01:56.648 SYMLINK libspdk_ut_mock.so 00:01:56.648 SYMLINK libspdk_log.so 00:01:56.910 CC lib/dma/dma.o 00:01:56.910 CC lib/util/base64.o 00:01:56.910 CC lib/ioat/ioat.o 00:01:56.910 CC lib/util/bit_array.o 00:01:56.910 CXX lib/trace_parser/trace.o 00:01:56.910 CC lib/util/cpuset.o 00:01:56.910 CC lib/util/crc16.o 00:01:56.910 CC lib/util/crc32.o 00:01:56.910 CC lib/util/crc32c.o 00:01:56.910 CC lib/util/crc32_ieee.o 00:01:56.910 CC lib/util/crc64.o 00:01:56.910 CC lib/util/dif.o 00:01:56.910 CC lib/util/fd.o 00:01:56.910 CC lib/util/file.o 00:01:56.910 CC lib/util/hexlify.o 00:01:56.910 CC lib/util/iov.o 00:01:56.910 CC lib/util/math.o 00:01:56.910 CC lib/util/pipe.o 00:01:56.910 CC lib/util/strerror_tls.o 00:01:56.910 CC lib/util/string.o 00:01:56.910 CC lib/util/uuid.o 00:01:56.910 CC lib/util/fd_group.o 00:01:56.910 CC lib/util/xor.o 00:01:56.910 CC lib/util/zipf.o 00:01:57.171 CC lib/vfio_user/host/vfio_user.o 00:01:57.171 CC lib/vfio_user/host/vfio_user_pci.o 00:01:57.171 LIB libspdk_dma.a 00:01:57.171 SO libspdk_dma.so.4.0 00:01:57.171 SYMLINK libspdk_dma.so 00:01:57.171 LIB libspdk_ioat.a 00:01:57.171 SO libspdk_ioat.so.7.0 00:01:57.433 LIB libspdk_vfio_user.a 00:01:57.433 SYMLINK libspdk_ioat.so 00:01:57.433 SO libspdk_vfio_user.so.5.0 00:01:57.433 LIB libspdk_util.a 00:01:57.433 SYMLINK libspdk_vfio_user.so 00:01:57.433 SO libspdk_util.so.9.0 00:01:57.694 SYMLINK libspdk_util.so 00:01:57.694 LIB libspdk_trace_parser.a 00:01:57.694 SO libspdk_trace_parser.so.5.0 00:01:57.955 SYMLINK libspdk_trace_parser.so 00:01:57.955 CC lib/env_dpdk/env.o 00:01:57.955 CC lib/env_dpdk/memory.o 00:01:57.955 CC lib/env_dpdk/init.o 00:01:57.955 CC lib/env_dpdk/pci.o 00:01:57.955 CC lib/env_dpdk/threads.o 00:01:57.955 CC lib/env_dpdk/pci_ioat.o 00:01:57.955 CC lib/env_dpdk/pci_virtio.o 00:01:57.955 CC lib/env_dpdk/pci_vmd.o 00:01:57.955 CC lib/env_dpdk/pci_idxd.o 00:01:57.955 CC lib/env_dpdk/pci_event.o 00:01:57.955 CC lib/env_dpdk/sigbus_handler.o 00:01:57.955 CC lib/env_dpdk/pci_dpdk.o 00:01:57.955 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:57.955 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:57.955 CC lib/json/json_parse.o 00:01:57.955 CC lib/json/json_util.o 00:01:57.955 CC lib/json/json_write.o 00:01:57.955 CC lib/vmd/vmd.o 00:01:57.955 CC lib/vmd/led.o 00:01:57.955 CC lib/conf/conf.o 00:01:57.955 CC lib/rdma/common.o 00:01:57.955 CC lib/rdma/rdma_verbs.o 00:01:57.955 CC lib/idxd/idxd.o 00:01:57.955 CC lib/idxd/idxd_user.o 00:01:57.955 CC lib/idxd/idxd_kernel.o 00:01:58.215 LIB libspdk_conf.a 00:01:58.215 SO libspdk_conf.so.6.0 00:01:58.215 LIB libspdk_rdma.a 00:01:58.215 LIB libspdk_json.a 00:01:58.215 SO libspdk_rdma.so.6.0 00:01:58.215 SYMLINK libspdk_conf.so 00:01:58.215 SO libspdk_json.so.6.0 00:01:58.476 SYMLINK libspdk_rdma.so 00:01:58.476 SYMLINK libspdk_json.so 00:01:58.476 LIB libspdk_idxd.a 00:01:58.476 SO libspdk_idxd.so.12.0 00:01:58.476 LIB libspdk_vmd.a 00:01:58.737 SO libspdk_vmd.so.6.0 00:01:58.737 SYMLINK libspdk_idxd.so 00:01:58.737 SYMLINK libspdk_vmd.so 00:01:58.737 CC lib/jsonrpc/jsonrpc_server.o 00:01:58.737 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:58.737 CC lib/jsonrpc/jsonrpc_client.o 00:01:58.737 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:58.997 LIB libspdk_jsonrpc.a 00:01:58.997 SO libspdk_jsonrpc.so.6.0 00:01:58.997 SYMLINK libspdk_jsonrpc.so 00:01:59.258 LIB libspdk_env_dpdk.a 00:01:59.258 SO libspdk_env_dpdk.so.14.1 00:01:59.519 SYMLINK libspdk_env_dpdk.so 00:01:59.519 CC lib/rpc/rpc.o 
00:01:59.780 LIB libspdk_rpc.a 00:01:59.780 SO libspdk_rpc.so.6.0 00:01:59.780 SYMLINK libspdk_rpc.so 00:02:00.041 CC lib/trace/trace.o 00:02:00.041 CC lib/trace/trace_flags.o 00:02:00.041 CC lib/trace/trace_rpc.o 00:02:00.041 CC lib/notify/notify.o 00:02:00.041 CC lib/notify/notify_rpc.o 00:02:00.041 CC lib/keyring/keyring.o 00:02:00.041 CC lib/keyring/keyring_rpc.o 00:02:00.302 LIB libspdk_notify.a 00:02:00.302 SO libspdk_notify.so.6.0 00:02:00.302 LIB libspdk_keyring.a 00:02:00.302 LIB libspdk_trace.a 00:02:00.302 SO libspdk_keyring.so.1.0 00:02:00.302 SO libspdk_trace.so.10.0 00:02:00.302 SYMLINK libspdk_notify.so 00:02:00.562 SYMLINK libspdk_keyring.so 00:02:00.562 SYMLINK libspdk_trace.so 00:02:00.821 CC lib/thread/thread.o 00:02:00.821 CC lib/thread/iobuf.o 00:02:00.821 CC lib/sock/sock.o 00:02:00.821 CC lib/sock/sock_rpc.o 00:02:01.082 LIB libspdk_sock.a 00:02:01.343 SO libspdk_sock.so.9.0 00:02:01.343 SYMLINK libspdk_sock.so 00:02:01.603 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:01.603 CC lib/nvme/nvme_ctrlr.o 00:02:01.603 CC lib/nvme/nvme_fabric.o 00:02:01.603 CC lib/nvme/nvme_ns_cmd.o 00:02:01.603 CC lib/nvme/nvme_ns.o 00:02:01.603 CC lib/nvme/nvme_pcie_common.o 00:02:01.603 CC lib/nvme/nvme_pcie.o 00:02:01.603 CC lib/nvme/nvme_qpair.o 00:02:01.603 CC lib/nvme/nvme.o 00:02:01.603 CC lib/nvme/nvme_quirks.o 00:02:01.603 CC lib/nvme/nvme_transport.o 00:02:01.603 CC lib/nvme/nvme_discovery.o 00:02:01.603 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:01.603 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:01.603 CC lib/nvme/nvme_tcp.o 00:02:01.603 CC lib/nvme/nvme_opal.o 00:02:01.603 CC lib/nvme/nvme_io_msg.o 00:02:01.603 CC lib/nvme/nvme_poll_group.o 00:02:01.603 CC lib/nvme/nvme_zns.o 00:02:01.603 CC lib/nvme/nvme_stubs.o 00:02:01.603 CC lib/nvme/nvme_auth.o 00:02:01.603 CC lib/nvme/nvme_cuse.o 00:02:01.603 CC lib/nvme/nvme_vfio_user.o 00:02:01.603 CC lib/nvme/nvme_rdma.o 00:02:02.170 LIB libspdk_thread.a 00:02:02.170 SO libspdk_thread.so.10.0 00:02:02.170 SYMLINK libspdk_thread.so 00:02:02.431 CC lib/blob/blobstore.o 00:02:02.431 CC lib/blob/zeroes.o 00:02:02.431 CC lib/blob/request.o 00:02:02.690 CC lib/blob/blob_bs_dev.o 00:02:02.690 CC lib/accel/accel.o 00:02:02.690 CC lib/accel/accel_rpc.o 00:02:02.690 CC lib/accel/accel_sw.o 00:02:02.690 CC lib/vfu_tgt/tgt_endpoint.o 00:02:02.690 CC lib/vfu_tgt/tgt_rpc.o 00:02:02.690 CC lib/virtio/virtio.o 00:02:02.690 CC lib/virtio/virtio_vhost_user.o 00:02:02.690 CC lib/virtio/virtio_vfio_user.o 00:02:02.690 CC lib/virtio/virtio_pci.o 00:02:02.690 CC lib/init/json_config.o 00:02:02.690 CC lib/init/subsystem.o 00:02:02.690 CC lib/init/subsystem_rpc.o 00:02:02.690 CC lib/init/rpc.o 00:02:02.950 LIB libspdk_init.a 00:02:02.950 LIB libspdk_vfu_tgt.a 00:02:02.950 SO libspdk_init.so.5.0 00:02:02.950 LIB libspdk_virtio.a 00:02:02.950 SO libspdk_vfu_tgt.so.3.0 00:02:02.950 SYMLINK libspdk_init.so 00:02:02.950 SO libspdk_virtio.so.7.0 00:02:02.950 SYMLINK libspdk_vfu_tgt.so 00:02:02.950 SYMLINK libspdk_virtio.so 00:02:03.209 CC lib/event/app.o 00:02:03.209 CC lib/event/reactor.o 00:02:03.209 CC lib/event/log_rpc.o 00:02:03.209 CC lib/event/app_rpc.o 00:02:03.209 CC lib/event/scheduler_static.o 00:02:03.469 LIB libspdk_accel.a 00:02:03.469 SO libspdk_accel.so.15.0 00:02:03.469 LIB libspdk_nvme.a 00:02:03.469 SYMLINK libspdk_accel.so 00:02:03.730 SO libspdk_nvme.so.13.0 00:02:03.730 LIB libspdk_event.a 00:02:03.730 SO libspdk_event.so.13.1 00:02:03.730 SYMLINK libspdk_event.so 00:02:03.990 CC lib/bdev/bdev.o 00:02:03.990 CC lib/bdev/bdev_rpc.o 00:02:03.990 CC 
lib/bdev/bdev_zone.o 00:02:03.990 CC lib/bdev/part.o 00:02:03.990 CC lib/bdev/scsi_nvme.o 00:02:03.990 SYMLINK libspdk_nvme.so 00:02:05.376 LIB libspdk_blob.a 00:02:05.376 SO libspdk_blob.so.11.0 00:02:05.376 SYMLINK libspdk_blob.so 00:02:05.637 CC lib/lvol/lvol.o 00:02:05.637 CC lib/blobfs/blobfs.o 00:02:05.637 CC lib/blobfs/tree.o 00:02:06.208 LIB libspdk_bdev.a 00:02:06.208 SO libspdk_bdev.so.15.0 00:02:06.208 SYMLINK libspdk_bdev.so 00:02:06.208 LIB libspdk_blobfs.a 00:02:06.468 SO libspdk_blobfs.so.10.0 00:02:06.468 LIB libspdk_lvol.a 00:02:06.468 SYMLINK libspdk_blobfs.so 00:02:06.468 SO libspdk_lvol.so.10.0 00:02:06.468 SYMLINK libspdk_lvol.so 00:02:06.728 CC lib/scsi/dev.o 00:02:06.728 CC lib/scsi/lun.o 00:02:06.728 CC lib/scsi/port.o 00:02:06.728 CC lib/scsi/scsi.o 00:02:06.728 CC lib/scsi/scsi_bdev.o 00:02:06.728 CC lib/scsi/scsi_pr.o 00:02:06.728 CC lib/scsi/scsi_rpc.o 00:02:06.728 CC lib/scsi/task.o 00:02:06.728 CC lib/nbd/nbd.o 00:02:06.728 CC lib/nbd/nbd_rpc.o 00:02:06.728 CC lib/ftl/ftl_core.o 00:02:06.728 CC lib/ftl/ftl_init.o 00:02:06.728 CC lib/ftl/ftl_layout.o 00:02:06.728 CC lib/ftl/ftl_debug.o 00:02:06.728 CC lib/ftl/ftl_io.o 00:02:06.728 CC lib/ftl/ftl_sb.o 00:02:06.728 CC lib/nvmf/ctrlr.o 00:02:06.728 CC lib/ftl/ftl_l2p.o 00:02:06.728 CC lib/ftl/ftl_l2p_flat.o 00:02:06.728 CC lib/nvmf/ctrlr_discovery.o 00:02:06.728 CC lib/ftl/ftl_nv_cache.o 00:02:06.728 CC lib/nvmf/ctrlr_bdev.o 00:02:06.728 CC lib/ftl/ftl_band.o 00:02:06.728 CC lib/nvmf/subsystem.o 00:02:06.728 CC lib/ftl/ftl_band_ops.o 00:02:06.728 CC lib/nvmf/nvmf.o 00:02:06.728 CC lib/ftl/ftl_writer.o 00:02:06.728 CC lib/nvmf/nvmf_rpc.o 00:02:06.728 CC lib/ftl/ftl_rq.o 00:02:06.728 CC lib/nvmf/transport.o 00:02:06.728 CC lib/ublk/ublk.o 00:02:06.728 CC lib/ftl/ftl_reloc.o 00:02:06.728 CC lib/nvmf/tcp.o 00:02:06.728 CC lib/ublk/ublk_rpc.o 00:02:06.728 CC lib/ftl/ftl_l2p_cache.o 00:02:06.728 CC lib/nvmf/stubs.o 00:02:06.728 CC lib/ftl/ftl_p2l.o 00:02:06.729 CC lib/nvmf/mdns_server.o 00:02:06.729 CC lib/ftl/mngt/ftl_mngt.o 00:02:06.729 CC lib/nvmf/vfio_user.o 00:02:06.729 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:06.729 CC lib/nvmf/rdma.o 00:02:06.729 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:06.729 CC lib/nvmf/auth.o 00:02:06.729 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:06.729 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:06.729 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:06.729 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:06.729 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:06.729 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:06.729 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:06.729 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:06.729 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:06.729 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:06.729 CC lib/ftl/utils/ftl_conf.o 00:02:06.729 CC lib/ftl/utils/ftl_md.o 00:02:06.729 CC lib/ftl/utils/ftl_mempool.o 00:02:06.729 CC lib/ftl/utils/ftl_property.o 00:02:06.729 CC lib/ftl/utils/ftl_bitmap.o 00:02:06.729 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:06.729 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:06.729 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:06.729 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:06.729 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:06.729 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:06.729 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:06.729 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:06.729 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:06.729 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:06.729 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:06.729 CC lib/ftl/base/ftl_base_dev.o 00:02:06.729 CC lib/ftl/base/ftl_base_bdev.o 00:02:06.729 CC 
lib/ftl/ftl_trace.o 00:02:06.987 LIB libspdk_nbd.a 00:02:07.246 SO libspdk_nbd.so.7.0 00:02:07.246 LIB libspdk_scsi.a 00:02:07.246 SYMLINK libspdk_nbd.so 00:02:07.246 SO libspdk_scsi.so.9.0 00:02:07.246 SYMLINK libspdk_scsi.so 00:02:07.246 LIB libspdk_ublk.a 00:02:07.506 SO libspdk_ublk.so.3.0 00:02:07.506 SYMLINK libspdk_ublk.so 00:02:07.766 LIB libspdk_ftl.a 00:02:07.766 CC lib/vhost/vhost.o 00:02:07.766 CC lib/vhost/vhost_rpc.o 00:02:07.766 CC lib/vhost/vhost_scsi.o 00:02:07.766 CC lib/vhost/vhost_blk.o 00:02:07.766 CC lib/vhost/rte_vhost_user.o 00:02:07.766 CC lib/iscsi/conn.o 00:02:07.766 CC lib/iscsi/init_grp.o 00:02:07.766 CC lib/iscsi/iscsi.o 00:02:07.766 CC lib/iscsi/md5.o 00:02:07.766 CC lib/iscsi/param.o 00:02:07.766 CC lib/iscsi/portal_grp.o 00:02:07.766 CC lib/iscsi/tgt_node.o 00:02:07.766 CC lib/iscsi/iscsi_subsystem.o 00:02:07.766 CC lib/iscsi/iscsi_rpc.o 00:02:07.766 CC lib/iscsi/task.o 00:02:07.766 SO libspdk_ftl.so.9.0 00:02:08.339 SYMLINK libspdk_ftl.so 00:02:08.600 LIB libspdk_nvmf.a 00:02:08.600 SO libspdk_nvmf.so.18.1 00:02:08.600 LIB libspdk_vhost.a 00:02:08.600 SO libspdk_vhost.so.8.0 00:02:08.861 SYMLINK libspdk_vhost.so 00:02:08.861 SYMLINK libspdk_nvmf.so 00:02:08.861 LIB libspdk_iscsi.a 00:02:08.861 SO libspdk_iscsi.so.8.0 00:02:09.122 SYMLINK libspdk_iscsi.so 00:02:09.694 CC module/env_dpdk/env_dpdk_rpc.o 00:02:09.694 CC module/vfu_device/vfu_virtio.o 00:02:09.694 CC module/vfu_device/vfu_virtio_blk.o 00:02:09.694 CC module/vfu_device/vfu_virtio_scsi.o 00:02:09.694 CC module/vfu_device/vfu_virtio_rpc.o 00:02:09.956 LIB libspdk_env_dpdk_rpc.a 00:02:09.956 CC module/keyring/file/keyring.o 00:02:09.956 CC module/keyring/file/keyring_rpc.o 00:02:09.956 CC module/accel/ioat/accel_ioat.o 00:02:09.956 CC module/accel/ioat/accel_ioat_rpc.o 00:02:09.956 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:09.956 CC module/accel/error/accel_error.o 00:02:09.956 CC module/blob/bdev/blob_bdev.o 00:02:09.956 CC module/accel/error/accel_error_rpc.o 00:02:09.956 CC module/scheduler/gscheduler/gscheduler.o 00:02:09.956 CC module/sock/posix/posix.o 00:02:09.956 CC module/keyring/linux/keyring.o 00:02:09.956 CC module/keyring/linux/keyring_rpc.o 00:02:09.956 CC module/accel/dsa/accel_dsa.o 00:02:09.956 CC module/accel/dsa/accel_dsa_rpc.o 00:02:09.956 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:09.956 CC module/accel/iaa/accel_iaa.o 00:02:09.956 CC module/accel/iaa/accel_iaa_rpc.o 00:02:09.956 SO libspdk_env_dpdk_rpc.so.6.0 00:02:09.956 SYMLINK libspdk_env_dpdk_rpc.so 00:02:09.956 LIB libspdk_keyring_file.a 00:02:09.956 LIB libspdk_scheduler_gscheduler.a 00:02:09.956 LIB libspdk_keyring_linux.a 00:02:09.956 LIB libspdk_accel_ioat.a 00:02:09.956 LIB libspdk_accel_error.a 00:02:09.956 LIB libspdk_scheduler_dpdk_governor.a 00:02:09.956 LIB libspdk_scheduler_dynamic.a 00:02:10.218 SO libspdk_keyring_file.so.1.0 00:02:10.218 SO libspdk_scheduler_gscheduler.so.4.0 00:02:10.218 SO libspdk_keyring_linux.so.1.0 00:02:10.218 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:10.218 SO libspdk_accel_ioat.so.6.0 00:02:10.218 SO libspdk_accel_error.so.2.0 00:02:10.218 SO libspdk_scheduler_dynamic.so.4.0 00:02:10.218 LIB libspdk_accel_iaa.a 00:02:10.218 LIB libspdk_blob_bdev.a 00:02:10.218 SYMLINK libspdk_scheduler_gscheduler.so 00:02:10.218 SO libspdk_accel_iaa.so.3.0 00:02:10.218 LIB libspdk_accel_dsa.a 00:02:10.218 SYMLINK libspdk_keyring_file.so 00:02:10.218 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:10.218 SYMLINK libspdk_accel_ioat.so 00:02:10.218 SYMLINK 
libspdk_keyring_linux.so 00:02:10.218 SO libspdk_blob_bdev.so.11.0 00:02:10.218 SYMLINK libspdk_scheduler_dynamic.so 00:02:10.218 SYMLINK libspdk_accel_error.so 00:02:10.218 SO libspdk_accel_dsa.so.5.0 00:02:10.218 SYMLINK libspdk_accel_iaa.so 00:02:10.218 SYMLINK libspdk_blob_bdev.so 00:02:10.218 SYMLINK libspdk_accel_dsa.so 00:02:10.218 LIB libspdk_vfu_device.a 00:02:10.218 SO libspdk_vfu_device.so.3.0 00:02:10.481 SYMLINK libspdk_vfu_device.so 00:02:10.481 LIB libspdk_sock_posix.a 00:02:10.745 SO libspdk_sock_posix.so.6.0 00:02:10.745 SYMLINK libspdk_sock_posix.so 00:02:10.745 CC module/bdev/error/vbdev_error.o 00:02:10.745 CC module/bdev/lvol/vbdev_lvol.o 00:02:10.745 CC module/bdev/error/vbdev_error_rpc.o 00:02:10.745 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:10.745 CC module/bdev/nvme/bdev_nvme.o 00:02:10.745 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:10.745 CC module/bdev/nvme/nvme_rpc.o 00:02:10.745 CC module/bdev/nvme/bdev_mdns_client.o 00:02:10.745 CC module/bdev/nvme/vbdev_opal.o 00:02:10.745 CC module/bdev/malloc/bdev_malloc.o 00:02:10.745 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:10.745 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:10.745 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:10.745 CC module/bdev/null/bdev_null.o 00:02:10.745 CC module/bdev/null/bdev_null_rpc.o 00:02:10.745 CC module/bdev/raid/bdev_raid.o 00:02:10.745 CC module/bdev/raid/bdev_raid_rpc.o 00:02:10.745 CC module/bdev/gpt/gpt.o 00:02:10.745 CC module/bdev/raid/bdev_raid_sb.o 00:02:10.745 CC module/bdev/gpt/vbdev_gpt.o 00:02:10.745 CC module/bdev/raid/raid0.o 00:02:10.745 CC module/bdev/raid/raid1.o 00:02:10.745 CC module/bdev/raid/concat.o 00:02:10.745 CC module/bdev/delay/vbdev_delay.o 00:02:10.745 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:10.745 CC module/bdev/split/vbdev_split.o 00:02:10.745 CC module/bdev/split/vbdev_split_rpc.o 00:02:10.745 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:10.745 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:10.745 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:10.745 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:10.745 CC module/blobfs/bdev/blobfs_bdev.o 00:02:10.745 CC module/bdev/aio/bdev_aio.o 00:02:10.745 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:10.745 CC module/bdev/aio/bdev_aio_rpc.o 00:02:10.745 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:10.745 CC module/bdev/iscsi/bdev_iscsi.o 00:02:10.745 CC module/bdev/ftl/bdev_ftl.o 00:02:10.745 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:10.745 CC module/bdev/passthru/vbdev_passthru.o 00:02:10.745 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:10.745 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:11.070 LIB libspdk_blobfs_bdev.a 00:02:11.070 LIB libspdk_bdev_error.a 00:02:11.070 LIB libspdk_bdev_gpt.a 00:02:11.070 SO libspdk_blobfs_bdev.so.6.0 00:02:11.070 LIB libspdk_bdev_split.a 00:02:11.070 LIB libspdk_bdev_null.a 00:02:11.070 SO libspdk_bdev_error.so.6.0 00:02:11.070 SO libspdk_bdev_gpt.so.6.0 00:02:11.070 LIB libspdk_bdev_passthru.a 00:02:11.070 LIB libspdk_bdev_ftl.a 00:02:11.070 SO libspdk_bdev_split.so.6.0 00:02:11.070 SO libspdk_bdev_null.so.6.0 00:02:11.070 LIB libspdk_bdev_zone_block.a 00:02:11.070 SYMLINK libspdk_blobfs_bdev.so 00:02:11.070 LIB libspdk_bdev_aio.a 00:02:11.070 SO libspdk_bdev_passthru.so.6.0 00:02:11.070 LIB libspdk_bdev_malloc.a 00:02:11.330 SO libspdk_bdev_ftl.so.6.0 00:02:11.330 SYMLINK libspdk_bdev_error.so 00:02:11.330 LIB libspdk_bdev_iscsi.a 00:02:11.330 LIB libspdk_bdev_delay.a 00:02:11.330 SYMLINK libspdk_bdev_gpt.so 00:02:11.330 SO libspdk_bdev_zone_block.so.6.0 
00:02:11.330 SO libspdk_bdev_aio.so.6.0 00:02:11.330 SO libspdk_bdev_malloc.so.6.0 00:02:11.330 SYMLINK libspdk_bdev_split.so 00:02:11.330 SYMLINK libspdk_bdev_null.so 00:02:11.330 SO libspdk_bdev_iscsi.so.6.0 00:02:11.330 SYMLINK libspdk_bdev_passthru.so 00:02:11.330 SO libspdk_bdev_delay.so.6.0 00:02:11.330 SYMLINK libspdk_bdev_ftl.so 00:02:11.330 SYMLINK libspdk_bdev_malloc.so 00:02:11.330 SYMLINK libspdk_bdev_zone_block.so 00:02:11.330 SYMLINK libspdk_bdev_aio.so 00:02:11.330 LIB libspdk_bdev_lvol.a 00:02:11.330 SYMLINK libspdk_bdev_iscsi.so 00:02:11.330 LIB libspdk_bdev_virtio.a 00:02:11.330 SYMLINK libspdk_bdev_delay.so 00:02:11.330 SO libspdk_bdev_lvol.so.6.0 00:02:11.330 SO libspdk_bdev_virtio.so.6.0 00:02:11.330 SYMLINK libspdk_bdev_lvol.so 00:02:11.591 SYMLINK libspdk_bdev_virtio.so 00:02:11.591 LIB libspdk_bdev_raid.a 00:02:11.851 SO libspdk_bdev_raid.so.6.0 00:02:11.851 SYMLINK libspdk_bdev_raid.so 00:02:12.795 LIB libspdk_bdev_nvme.a 00:02:12.795 SO libspdk_bdev_nvme.so.7.0 00:02:12.795 SYMLINK libspdk_bdev_nvme.so 00:02:13.738 CC module/event/subsystems/iobuf/iobuf.o 00:02:13.738 CC module/event/subsystems/keyring/keyring.o 00:02:13.738 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:13.738 CC module/event/subsystems/vmd/vmd.o 00:02:13.738 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:13.738 CC module/event/subsystems/sock/sock.o 00:02:13.738 CC module/event/subsystems/scheduler/scheduler.o 00:02:13.738 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:13.738 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:13.738 LIB libspdk_event_keyring.a 00:02:13.738 LIB libspdk_event_vhost_blk.a 00:02:13.738 LIB libspdk_event_sock.a 00:02:13.738 LIB libspdk_event_iobuf.a 00:02:13.738 LIB libspdk_event_vmd.a 00:02:13.738 LIB libspdk_event_scheduler.a 00:02:13.738 LIB libspdk_event_vfu_tgt.a 00:02:13.738 SO libspdk_event_keyring.so.1.0 00:02:13.738 SO libspdk_event_sock.so.5.0 00:02:13.738 SO libspdk_event_vhost_blk.so.3.0 00:02:13.738 SO libspdk_event_scheduler.so.4.0 00:02:13.738 SO libspdk_event_iobuf.so.3.0 00:02:13.738 SO libspdk_event_vmd.so.6.0 00:02:13.738 SO libspdk_event_vfu_tgt.so.3.0 00:02:13.738 SYMLINK libspdk_event_vhost_blk.so 00:02:13.738 SYMLINK libspdk_event_keyring.so 00:02:13.738 SYMLINK libspdk_event_sock.so 00:02:13.999 SYMLINK libspdk_event_scheduler.so 00:02:13.999 SYMLINK libspdk_event_vfu_tgt.so 00:02:13.999 SYMLINK libspdk_event_iobuf.so 00:02:13.999 SYMLINK libspdk_event_vmd.so 00:02:14.261 CC module/event/subsystems/accel/accel.o 00:02:14.261 LIB libspdk_event_accel.a 00:02:14.521 SO libspdk_event_accel.so.6.0 00:02:14.521 SYMLINK libspdk_event_accel.so 00:02:14.782 CC module/event/subsystems/bdev/bdev.o 00:02:15.043 LIB libspdk_event_bdev.a 00:02:15.043 SO libspdk_event_bdev.so.6.0 00:02:15.043 SYMLINK libspdk_event_bdev.so 00:02:15.615 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:15.615 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:15.615 CC module/event/subsystems/scsi/scsi.o 00:02:15.615 CC module/event/subsystems/nbd/nbd.o 00:02:15.615 CC module/event/subsystems/ublk/ublk.o 00:02:15.615 LIB libspdk_event_nbd.a 00:02:15.615 LIB libspdk_event_ublk.a 00:02:15.615 LIB libspdk_event_scsi.a 00:02:15.615 LIB libspdk_event_nvmf.a 00:02:15.615 SO libspdk_event_nbd.so.6.0 00:02:15.615 SO libspdk_event_ublk.so.3.0 00:02:15.615 SO libspdk_event_scsi.so.6.0 00:02:15.615 SO libspdk_event_nvmf.so.6.0 00:02:15.615 SYMLINK libspdk_event_nbd.so 00:02:15.615 SYMLINK libspdk_event_ublk.so 00:02:15.875 SYMLINK libspdk_event_scsi.so 00:02:15.875 SYMLINK 
libspdk_event_nvmf.so 00:02:16.136 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:16.136 CC module/event/subsystems/iscsi/iscsi.o 00:02:16.136 LIB libspdk_event_vhost_scsi.a 00:02:16.397 SO libspdk_event_vhost_scsi.so.3.0 00:02:16.397 LIB libspdk_event_iscsi.a 00:02:16.397 SO libspdk_event_iscsi.so.6.0 00:02:16.397 SYMLINK libspdk_event_vhost_scsi.so 00:02:16.397 SYMLINK libspdk_event_iscsi.so 00:02:16.659 SO libspdk.so.6.0 00:02:16.659 SYMLINK libspdk.so 00:02:16.920 CC app/trace_record/trace_record.o 00:02:16.920 CC app/spdk_nvme_discover/discovery_aer.o 00:02:16.920 CC app/spdk_nvme_perf/perf.o 00:02:16.920 CC app/spdk_lspci/spdk_lspci.o 00:02:16.920 CC app/spdk_nvme_identify/identify.o 00:02:16.920 CC app/spdk_top/spdk_top.o 00:02:16.920 CXX app/trace/trace.o 00:02:16.920 TEST_HEADER include/spdk/accel.h 00:02:16.920 CC test/rpc_client/rpc_client_test.o 00:02:16.920 TEST_HEADER include/spdk/accel_module.h 00:02:16.920 TEST_HEADER include/spdk/bdev_module.h 00:02:16.920 TEST_HEADER include/spdk/assert.h 00:02:16.920 TEST_HEADER include/spdk/bdev.h 00:02:16.920 TEST_HEADER include/spdk/barrier.h 00:02:16.920 TEST_HEADER include/spdk/base64.h 00:02:16.920 TEST_HEADER include/spdk/bdev_zone.h 00:02:16.920 TEST_HEADER include/spdk/bit_array.h 00:02:16.920 TEST_HEADER include/spdk/bit_pool.h 00:02:16.920 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:17.188 TEST_HEADER include/spdk/blob_bdev.h 00:02:17.188 TEST_HEADER include/spdk/blob.h 00:02:17.188 TEST_HEADER include/spdk/config.h 00:02:17.188 TEST_HEADER include/spdk/conf.h 00:02:17.188 TEST_HEADER include/spdk/blobfs.h 00:02:17.188 TEST_HEADER include/spdk/crc16.h 00:02:17.188 TEST_HEADER include/spdk/cpuset.h 00:02:17.188 TEST_HEADER include/spdk/crc32.h 00:02:17.188 TEST_HEADER include/spdk/dif.h 00:02:17.188 TEST_HEADER include/spdk/dma.h 00:02:17.188 TEST_HEADER include/spdk/crc64.h 00:02:17.188 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:17.188 TEST_HEADER include/spdk/endian.h 00:02:17.188 TEST_HEADER include/spdk/env.h 00:02:17.188 TEST_HEADER include/spdk/env_dpdk.h 00:02:17.188 TEST_HEADER include/spdk/event.h 00:02:17.188 TEST_HEADER include/spdk/fd_group.h 00:02:17.188 TEST_HEADER include/spdk/fd.h 00:02:17.188 TEST_HEADER include/spdk/file.h 00:02:17.188 CC app/vhost/vhost.o 00:02:17.188 CC app/nvmf_tgt/nvmf_main.o 00:02:17.188 TEST_HEADER include/spdk/hexlify.h 00:02:17.188 TEST_HEADER include/spdk/ftl.h 00:02:17.188 TEST_HEADER include/spdk/histogram_data.h 00:02:17.188 TEST_HEADER include/spdk/gpt_spec.h 00:02:17.188 TEST_HEADER include/spdk/idxd.h 00:02:17.188 TEST_HEADER include/spdk/idxd_spec.h 00:02:17.188 TEST_HEADER include/spdk/init.h 00:02:17.188 TEST_HEADER include/spdk/ioat.h 00:02:17.188 TEST_HEADER include/spdk/ioat_spec.h 00:02:17.188 TEST_HEADER include/spdk/iscsi_spec.h 00:02:17.188 TEST_HEADER include/spdk/jsonrpc.h 00:02:17.188 TEST_HEADER include/spdk/json.h 00:02:17.188 TEST_HEADER include/spdk/keyring.h 00:02:17.188 TEST_HEADER include/spdk/keyring_module.h 00:02:17.188 TEST_HEADER include/spdk/likely.h 00:02:17.188 TEST_HEADER include/spdk/log.h 00:02:17.188 CC app/spdk_dd/spdk_dd.o 00:02:17.189 TEST_HEADER include/spdk/lvol.h 00:02:17.189 TEST_HEADER include/spdk/mmio.h 00:02:17.189 TEST_HEADER include/spdk/nbd.h 00:02:17.189 TEST_HEADER include/spdk/memory.h 00:02:17.189 TEST_HEADER include/spdk/nvme.h 00:02:17.189 TEST_HEADER include/spdk/notify.h 00:02:17.189 TEST_HEADER include/spdk/nvme_intel.h 00:02:17.189 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:17.189 TEST_HEADER 
include/spdk/nvme_ocssd_spec.h 00:02:17.189 CC app/iscsi_tgt/iscsi_tgt.o 00:02:17.189 TEST_HEADER include/spdk/nvme_spec.h 00:02:17.189 TEST_HEADER include/spdk/nvme_zns.h 00:02:17.189 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:17.189 TEST_HEADER include/spdk/nvmf_spec.h 00:02:17.189 TEST_HEADER include/spdk/nvmf.h 00:02:17.189 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:17.189 TEST_HEADER include/spdk/nvmf_transport.h 00:02:17.189 TEST_HEADER include/spdk/opal_spec.h 00:02:17.189 TEST_HEADER include/spdk/opal.h 00:02:17.189 TEST_HEADER include/spdk/pipe.h 00:02:17.189 TEST_HEADER include/spdk/pci_ids.h 00:02:17.189 TEST_HEADER include/spdk/queue.h 00:02:17.189 TEST_HEADER include/spdk/rpc.h 00:02:17.189 TEST_HEADER include/spdk/reduce.h 00:02:17.189 TEST_HEADER include/spdk/scheduler.h 00:02:17.189 TEST_HEADER include/spdk/scsi_spec.h 00:02:17.189 TEST_HEADER include/spdk/scsi.h 00:02:17.189 TEST_HEADER include/spdk/stdinc.h 00:02:17.189 TEST_HEADER include/spdk/sock.h 00:02:17.189 TEST_HEADER include/spdk/string.h 00:02:17.189 TEST_HEADER include/spdk/trace_parser.h 00:02:17.189 CC app/spdk_tgt/spdk_tgt.o 00:02:17.189 TEST_HEADER include/spdk/trace.h 00:02:17.189 TEST_HEADER include/spdk/thread.h 00:02:17.189 TEST_HEADER include/spdk/ublk.h 00:02:17.189 TEST_HEADER include/spdk/util.h 00:02:17.189 TEST_HEADER include/spdk/tree.h 00:02:17.189 TEST_HEADER include/spdk/uuid.h 00:02:17.189 TEST_HEADER include/spdk/version.h 00:02:17.189 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:17.189 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:17.189 TEST_HEADER include/spdk/vhost.h 00:02:17.189 TEST_HEADER include/spdk/vmd.h 00:02:17.189 CXX test/cpp_headers/accel.o 00:02:17.189 TEST_HEADER include/spdk/xor.h 00:02:17.189 CXX test/cpp_headers/accel_module.o 00:02:17.189 TEST_HEADER include/spdk/zipf.h 00:02:17.189 CXX test/cpp_headers/assert.o 00:02:17.189 CXX test/cpp_headers/barrier.o 00:02:17.189 CXX test/cpp_headers/base64.o 00:02:17.189 CXX test/cpp_headers/bdev.o 00:02:17.189 CXX test/cpp_headers/bdev_module.o 00:02:17.189 CXX test/cpp_headers/bit_array.o 00:02:17.189 CXX test/cpp_headers/bdev_zone.o 00:02:17.189 CXX test/cpp_headers/blobfs_bdev.o 00:02:17.189 CXX test/cpp_headers/blob_bdev.o 00:02:17.189 CXX test/cpp_headers/bit_pool.o 00:02:17.189 CXX test/cpp_headers/blob.o 00:02:17.189 CXX test/cpp_headers/blobfs.o 00:02:17.189 CXX test/cpp_headers/conf.o 00:02:17.189 CXX test/cpp_headers/config.o 00:02:17.189 CXX test/cpp_headers/cpuset.o 00:02:17.189 CXX test/cpp_headers/crc32.o 00:02:17.189 CXX test/cpp_headers/crc16.o 00:02:17.189 CXX test/cpp_headers/crc64.o 00:02:17.189 CXX test/cpp_headers/dif.o 00:02:17.189 CXX test/cpp_headers/env.o 00:02:17.189 CXX test/cpp_headers/dma.o 00:02:17.189 CXX test/cpp_headers/env_dpdk.o 00:02:17.189 CXX test/cpp_headers/endian.o 00:02:17.189 CXX test/cpp_headers/fd_group.o 00:02:17.189 CXX test/cpp_headers/file.o 00:02:17.189 CXX test/cpp_headers/event.o 00:02:17.189 CXX test/cpp_headers/fd.o 00:02:17.189 CXX test/cpp_headers/ftl.o 00:02:17.189 CXX test/cpp_headers/gpt_spec.o 00:02:17.189 CXX test/cpp_headers/idxd.o 00:02:17.189 CXX test/cpp_headers/hexlify.o 00:02:17.189 CXX test/cpp_headers/histogram_data.o 00:02:17.189 CXX test/cpp_headers/init.o 00:02:17.189 CXX test/cpp_headers/idxd_spec.o 00:02:17.189 CXX test/cpp_headers/ioat.o 00:02:17.189 CXX test/cpp_headers/ioat_spec.o 00:02:17.189 CXX test/cpp_headers/iscsi_spec.o 00:02:17.189 CXX test/cpp_headers/json.o 00:02:17.189 CXX test/cpp_headers/keyring.o 00:02:17.189 CXX 
test/cpp_headers/jsonrpc.o 00:02:17.189 CXX test/cpp_headers/keyring_module.o 00:02:17.189 CXX test/cpp_headers/lvol.o 00:02:17.189 CXX test/cpp_headers/likely.o 00:02:17.189 CXX test/cpp_headers/log.o 00:02:17.189 CXX test/cpp_headers/memory.o 00:02:17.189 CXX test/cpp_headers/mmio.o 00:02:17.189 CXX test/cpp_headers/nbd.o 00:02:17.189 CXX test/cpp_headers/nvme.o 00:02:17.189 CXX test/cpp_headers/notify.o 00:02:17.189 CXX test/cpp_headers/nvme_intel.o 00:02:17.189 CXX test/cpp_headers/nvme_ocssd.o 00:02:17.189 CXX test/cpp_headers/nvme_spec.o 00:02:17.189 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:17.189 CXX test/cpp_headers/nvmf_cmd.o 00:02:17.189 CXX test/cpp_headers/nvme_zns.o 00:02:17.189 CXX test/cpp_headers/nvmf.o 00:02:17.189 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:17.189 CXX test/cpp_headers/opal.o 00:02:17.189 CXX test/cpp_headers/nvmf_spec.o 00:02:17.189 CXX test/cpp_headers/nvmf_transport.o 00:02:17.189 CXX test/cpp_headers/pci_ids.o 00:02:17.189 CXX test/cpp_headers/opal_spec.o 00:02:17.189 CXX test/cpp_headers/pipe.o 00:02:17.189 CXX test/cpp_headers/reduce.o 00:02:17.189 CXX test/cpp_headers/queue.o 00:02:17.189 CXX test/cpp_headers/rpc.o 00:02:17.189 CXX test/cpp_headers/scheduler.o 00:02:17.460 CXX test/cpp_headers/scsi.o 00:02:17.460 CC examples/util/zipf/zipf.o 00:02:17.460 CC examples/nvme/arbitration/arbitration.o 00:02:17.460 CC examples/nvme/hello_world/hello_world.o 00:02:17.460 CC examples/blob/cli/blobcli.o 00:02:17.460 CC examples/nvme/abort/abort.o 00:02:17.460 CC examples/ioat/verify/verify.o 00:02:17.460 CC examples/sock/hello_world/hello_sock.o 00:02:17.460 CC examples/nvme/reconnect/reconnect.o 00:02:17.460 CC examples/vmd/led/led.o 00:02:17.460 CC test/app/stub/stub.o 00:02:17.460 CC test/nvme/reset/reset.o 00:02:17.460 CC test/nvme/e2edp/nvme_dp.o 00:02:17.460 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:17.460 CC test/nvme/sgl/sgl.o 00:02:17.460 CC examples/vmd/lsvmd/lsvmd.o 00:02:17.460 CC test/app/jsoncat/jsoncat.o 00:02:17.460 CC examples/nvme/hotplug/hotplug.o 00:02:17.460 CC examples/ioat/perf/perf.o 00:02:17.460 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:17.460 CXX test/cpp_headers/scsi_spec.o 00:02:17.460 CC examples/idxd/perf/perf.o 00:02:17.460 CC test/nvme/reserve/reserve.o 00:02:17.460 CC test/event/reactor/reactor.o 00:02:17.460 CC test/nvme/aer/aer.o 00:02:17.460 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:17.460 CC test/nvme/boot_partition/boot_partition.o 00:02:17.460 CC app/fio/nvme/fio_plugin.o 00:02:17.460 CC test/app/histogram_perf/histogram_perf.o 00:02:17.460 CC test/env/vtophys/vtophys.o 00:02:17.460 CC test/nvme/connect_stress/connect_stress.o 00:02:17.460 CC test/nvme/cuse/cuse.o 00:02:17.460 CC test/nvme/overhead/overhead.o 00:02:17.460 CC test/nvme/startup/startup.o 00:02:17.460 CC test/event/event_perf/event_perf.o 00:02:17.460 CC examples/accel/perf/accel_perf.o 00:02:17.460 CC test/nvme/compliance/nvme_compliance.o 00:02:17.460 CC test/bdev/bdevio/bdevio.o 00:02:17.460 CC examples/blob/hello_world/hello_blob.o 00:02:17.460 CC test/thread/poller_perf/poller_perf.o 00:02:17.460 CC test/nvme/err_injection/err_injection.o 00:02:17.460 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:17.460 CC test/nvme/fused_ordering/fused_ordering.o 00:02:17.460 CC test/nvme/fdp/fdp.o 00:02:17.460 CC test/nvme/simple_copy/simple_copy.o 00:02:17.460 CC test/event/reactor_perf/reactor_perf.o 00:02:17.460 CC examples/bdev/bdevperf/bdevperf.o 00:02:17.460 CC examples/nvmf/nvmf/nvmf.o 00:02:17.460 CC 
examples/bdev/hello_world/hello_bdev.o 00:02:17.460 CC test/env/pci/pci_ut.o 00:02:17.460 CC test/blobfs/mkfs/mkfs.o 00:02:17.460 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:17.460 CC test/env/memory/memory_ut.o 00:02:17.460 LINK spdk_lspci 00:02:17.460 CC app/fio/bdev/fio_plugin.o 00:02:17.460 CC test/event/scheduler/scheduler.o 00:02:17.460 CC test/accel/dif/dif.o 00:02:17.460 CC test/dma/test_dma/test_dma.o 00:02:17.460 CC test/app/bdev_svc/bdev_svc.o 00:02:17.460 CC test/event/app_repeat/app_repeat.o 00:02:17.460 CC examples/thread/thread/thread_ex.o 00:02:17.732 LINK rpc_client_test 00:02:17.732 LINK spdk_nvme_discover 00:02:17.732 LINK interrupt_tgt 00:02:17.732 LINK spdk_trace_record 00:02:17.732 LINK nvmf_tgt 00:02:17.732 LINK vhost 00:02:17.994 CC test/lvol/esnap/esnap.o 00:02:17.994 CC test/env/mem_callbacks/mem_callbacks.o 00:02:17.994 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:17.994 LINK iscsi_tgt 00:02:17.994 LINK led 00:02:17.994 LINK event_perf 00:02:17.994 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:17.994 LINK lsvmd 00:02:17.994 LINK zipf 00:02:17.994 LINK pmr_persistence 00:02:17.994 LINK reactor_perf 00:02:17.994 CXX test/cpp_headers/sock.o 00:02:17.994 LINK reactor 00:02:17.994 LINK jsoncat 00:02:18.256 LINK stub 00:02:18.256 CXX test/cpp_headers/stdinc.o 00:02:18.256 CXX test/cpp_headers/string.o 00:02:18.256 LINK reserve 00:02:18.256 CXX test/cpp_headers/thread.o 00:02:18.256 CXX test/cpp_headers/trace.o 00:02:18.256 LINK vtophys 00:02:18.256 CXX test/cpp_headers/trace_parser.o 00:02:18.256 LINK spdk_tgt 00:02:18.256 LINK boot_partition 00:02:18.256 LINK histogram_perf 00:02:18.256 LINK poller_perf 00:02:18.256 CXX test/cpp_headers/tree.o 00:02:18.256 CXX test/cpp_headers/ublk.o 00:02:18.256 LINK hello_sock 00:02:18.256 LINK verify 00:02:18.256 LINK err_injection 00:02:18.256 CXX test/cpp_headers/util.o 00:02:18.256 LINK env_dpdk_post_init 00:02:18.256 CXX test/cpp_headers/version.o 00:02:18.256 CXX test/cpp_headers/uuid.o 00:02:18.256 CXX test/cpp_headers/vfio_user_pci.o 00:02:18.256 CXX test/cpp_headers/vfio_user_spec.o 00:02:18.256 LINK startup 00:02:18.256 LINK connect_stress 00:02:18.256 CXX test/cpp_headers/vhost.o 00:02:18.256 LINK cmb_copy 00:02:18.256 CXX test/cpp_headers/vmd.o 00:02:18.256 CXX test/cpp_headers/xor.o 00:02:18.256 CXX test/cpp_headers/zipf.o 00:02:18.256 LINK app_repeat 00:02:18.256 LINK hello_blob 00:02:18.256 LINK hello_world 00:02:18.256 LINK doorbell_aers 00:02:18.256 LINK fused_ordering 00:02:18.256 LINK simple_copy 00:02:18.256 LINK ioat_perf 00:02:18.256 LINK bdev_svc 00:02:18.256 LINK hotplug 00:02:18.256 LINK spdk_dd 00:02:18.256 LINK scheduler 00:02:18.256 LINK hello_bdev 00:02:18.256 LINK nvme_dp 00:02:18.256 LINK mkfs 00:02:18.256 LINK sgl 00:02:18.256 LINK reset 00:02:18.256 LINK aer 00:02:18.256 LINK spdk_trace 00:02:18.256 LINK thread 00:02:18.256 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:18.256 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:18.256 LINK nvmf 00:02:18.256 LINK overhead 00:02:18.256 LINK idxd_perf 00:02:18.256 LINK reconnect 00:02:18.516 LINK abort 00:02:18.516 LINK bdevio 00:02:18.516 LINK arbitration 00:02:18.516 LINK nvme_compliance 00:02:18.516 LINK pci_ut 00:02:18.516 LINK fdp 00:02:18.516 LINK test_dma 00:02:18.516 LINK blobcli 00:02:18.516 LINK dif 00:02:18.516 LINK accel_perf 00:02:18.516 LINK nvme_manage 00:02:18.516 LINK spdk_nvme 00:02:18.516 LINK spdk_nvme_perf 00:02:18.516 LINK spdk_nvme_identify 00:02:18.778 LINK spdk_bdev 00:02:18.778 LINK nvme_fuzz 00:02:18.778 LINK spdk_top 
00:02:18.778 LINK vhost_fuzz 00:02:18.778 LINK bdevperf 00:02:18.778 LINK mem_callbacks 00:02:19.039 LINK memory_ut 00:02:19.039 LINK cuse 00:02:19.985 LINK iscsi_fuzz 00:02:22.531 LINK esnap 00:02:22.793 00:02:22.793 real 0m51.244s 00:02:22.793 user 6m51.584s 00:02:22.793 sys 5m16.590s 00:02:22.793 09:16:54 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:02:22.793 09:16:54 make -- common/autotest_common.sh@10 -- $ set +x 00:02:22.793 ************************************ 00:02:22.793 END TEST make 00:02:22.793 ************************************ 00:02:22.793 09:16:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:22.793 09:16:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:22.793 09:16:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:22.793 09:16:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.793 09:16:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:22.793 09:16:54 -- pm/common@44 -- $ pid=766095 00:02:22.793 09:16:54 -- pm/common@50 -- $ kill -TERM 766095 00:02:22.793 09:16:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.793 09:16:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:22.793 09:16:54 -- pm/common@44 -- $ pid=766096 00:02:22.793 09:16:54 -- pm/common@50 -- $ kill -TERM 766096 00:02:22.793 09:16:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.793 09:16:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:22.793 09:16:54 -- pm/common@44 -- $ pid=766098 00:02:22.793 09:16:54 -- pm/common@50 -- $ kill -TERM 766098 00:02:22.793 09:16:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.793 09:16:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:22.793 09:16:54 -- pm/common@44 -- $ pid=766121 00:02:22.793 09:16:54 -- pm/common@50 -- $ sudo -E kill -TERM 766121 00:02:23.054 09:16:54 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:23.054 09:16:54 -- nvmf/common.sh@7 -- # uname -s 00:02:23.054 09:16:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:23.054 09:16:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:23.054 09:16:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:23.054 09:16:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:23.054 09:16:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:23.054 09:16:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:23.054 09:16:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:23.054 09:16:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:23.054 09:16:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:23.054 09:16:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:23.054 09:16:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:23.054 09:16:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:23.054 09:16:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:23.054 09:16:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:23.054 09:16:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:23.054 09:16:54 -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:23.054 09:16:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:23.054 09:16:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:23.054 09:16:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:23.054 09:16:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:23.054 09:16:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.054 09:16:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.054 09:16:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.054 09:16:54 -- paths/export.sh@5 -- # export PATH 00:02:23.054 09:16:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.054 09:16:54 -- nvmf/common.sh@47 -- # : 0 00:02:23.054 09:16:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:23.054 09:16:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:23.054 09:16:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:23.054 09:16:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:23.054 09:16:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:23.054 09:16:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:23.054 09:16:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:23.054 09:16:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:23.054 09:16:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:23.054 09:16:54 -- spdk/autotest.sh@32 -- # uname -s 00:02:23.054 09:16:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:23.054 09:16:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:23.054 09:16:54 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:23.054 09:16:54 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:23.054 09:16:54 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:23.054 09:16:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:23.054 09:16:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:23.054 09:16:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:23.054 09:16:54 -- spdk/autotest.sh@48 -- # udevadm_pid=829337 00:02:23.054 09:16:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:23.054 09:16:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:23.054 09:16:54 -- 
pm/common@17 -- # local monitor 00:02:23.054 09:16:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.054 09:16:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.055 09:16:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.055 09:16:54 -- pm/common@21 -- # date +%s 00:02:23.055 09:16:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.055 09:16:54 -- pm/common@21 -- # date +%s 00:02:23.055 09:16:54 -- pm/common@25 -- # sleep 1 00:02:23.055 09:16:54 -- pm/common@21 -- # date +%s 00:02:23.055 09:16:54 -- pm/common@21 -- # date +%s 00:02:23.055 09:16:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718090214 00:02:23.055 09:16:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718090214 00:02:23.055 09:16:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718090214 00:02:23.055 09:16:54 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718090214 00:02:23.055 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718090214_collect-vmstat.pm.log 00:02:23.055 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718090214_collect-cpu-load.pm.log 00:02:23.055 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718090214_collect-cpu-temp.pm.log 00:02:23.055 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718090214_collect-bmc-pm.bmc.pm.log 00:02:23.999 09:16:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:23.999 09:16:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:23.999 09:16:55 -- common/autotest_common.sh@723 -- # xtrace_disable 00:02:23.999 09:16:55 -- common/autotest_common.sh@10 -- # set +x 00:02:23.999 09:16:55 -- spdk/autotest.sh@59 -- # create_test_list 00:02:23.999 09:16:55 -- common/autotest_common.sh@747 -- # xtrace_disable 00:02:23.999 09:16:55 -- common/autotest_common.sh@10 -- # set +x 00:02:23.999 09:16:55 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:23.999 09:16:55 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:24.260 09:16:55 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:24.260 09:16:55 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:24.260 09:16:55 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:24.260 09:16:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:24.260 09:16:55 -- common/autotest_common.sh@1454 -- # uname 00:02:24.260 09:16:55 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:02:24.260 09:16:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:24.260 
09:16:55 -- common/autotest_common.sh@1474 -- # uname 00:02:24.260 09:16:55 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:02:24.260 09:16:55 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:24.260 09:16:55 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:24.260 09:16:55 -- spdk/autotest.sh@72 -- # hash lcov 00:02:24.260 09:16:55 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:24.260 09:16:55 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:24.260 --rc lcov_branch_coverage=1 00:02:24.260 --rc lcov_function_coverage=1 00:02:24.260 --rc genhtml_branch_coverage=1 00:02:24.260 --rc genhtml_function_coverage=1 00:02:24.260 --rc genhtml_legend=1 00:02:24.260 --rc geninfo_all_blocks=1 00:02:24.260 ' 00:02:24.260 09:16:55 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:24.260 --rc lcov_branch_coverage=1 00:02:24.260 --rc lcov_function_coverage=1 00:02:24.260 --rc genhtml_branch_coverage=1 00:02:24.260 --rc genhtml_function_coverage=1 00:02:24.260 --rc genhtml_legend=1 00:02:24.260 --rc geninfo_all_blocks=1 00:02:24.260 ' 00:02:24.260 09:16:55 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:24.260 --rc lcov_branch_coverage=1 00:02:24.260 --rc lcov_function_coverage=1 00:02:24.260 --rc genhtml_branch_coverage=1 00:02:24.260 --rc genhtml_function_coverage=1 00:02:24.260 --rc genhtml_legend=1 00:02:24.260 --rc geninfo_all_blocks=1 00:02:24.260 --no-external' 00:02:24.260 09:16:55 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:24.260 --rc lcov_branch_coverage=1 00:02:24.260 --rc lcov_function_coverage=1 00:02:24.260 --rc genhtml_branch_coverage=1 00:02:24.260 --rc genhtml_function_coverage=1 00:02:24.260 --rc genhtml_legend=1 00:02:24.260 --rc geninfo_all_blocks=1 00:02:24.260 --no-external' 00:02:24.260 09:16:55 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:24.260 lcov: LCOV version 1.14 00:02:24.260 09:16:55 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:36.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:36.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:51.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:51.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:51.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:51.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:51.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:51.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:51.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no 
functions found 00:02:51.476 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno
00:02:51.476-00:02:51.740 [the same ':no functions found' / 'geninfo: WARNING: GCOV did not produce any data' pair repeats for each remaining header stub under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/: bdev_module, bdev, base64, bdev_zone, bit_array, blobfs_bdev, bit_pool, conf, blob_bdev, blob, blobfs, config, cpuset, crc32, env, crc64, dif, crc16, env_dpdk, event, dma, endian, fd, file, gpt_spec, fd_group, idxd, ftl, histogram_data, idxd_spec, hexlify, ioat, init, ioat_spec, json, iscsi_spec, keyring, jsonrpc, keyring_module, likely, lvol, log, memory, nvme_spec, nvme_ocssd, mmio, nbd, nvme_ocssd_spec, nvme_intel, notify, nvme, nvmf_cmd, nvmf, nvme_zns, nvmf_spec, nvmf_fc_spec, pci_ids, opal, nvmf_transport, reduce, pipe, opal_spec, rpc, queue, scheduler, scsi, scsi_spec, sock, stdinc, string, thread, trace, trace_parser, tree, ublk, util, version, uuid, vfio_user_pci, vfio_user_spec, and vhost (.gcno)]
00:02:51.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:51.740 geninfo: WARNING: GCOV did not produce any data for
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:51.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:51.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:51.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:51.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:53.654 09:17:25 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:53.654 09:17:25 -- common/autotest_common.sh@723 -- # xtrace_disable 00:02:53.654 09:17:25 -- common/autotest_common.sh@10 -- # set +x 00:02:53.654 09:17:25 -- spdk/autotest.sh@91 -- # rm -f 00:02:53.654 09:17:25 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:56.955 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:56.955 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:56.955 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:56.955 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:56.955 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:56.955 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:56.955 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:56.955 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:56.955 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:56.955 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:56.955 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:56.955 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:56.955 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:56.955 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:56.955 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:56.955 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:56.955 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:56.955 09:17:28 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:56.955 09:17:28 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:02:56.955 09:17:28 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:02:56.955 09:17:28 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:02:56.955 09:17:28 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:02:56.955 09:17:28 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:02:56.955 09:17:28 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:02:56.955 09:17:28 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:56.955 09:17:28 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:02:56.955 09:17:28 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:56.955 09:17:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:56.955 09:17:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:56.955 09:17:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:56.955 09:17:28 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:56.955 09:17:28 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:56.955 No valid GPT data, bailing 00:02:56.955 09:17:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
00:02:56.955 09:17:28 -- scripts/common.sh@391 -- # pt= 00:02:56.955 09:17:28 -- scripts/common.sh@392 -- # return 1 00:02:56.955 09:17:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:56.955 1+0 records in 00:02:56.955 1+0 records out 00:02:56.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.005178 s, 203 MB/s 00:02:56.955 09:17:28 -- spdk/autotest.sh@118 -- # sync 00:02:56.955 09:17:28 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:56.955 09:17:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:56.955 09:17:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:05.100 09:17:36 -- spdk/autotest.sh@124 -- # uname -s 00:03:05.100 09:17:36 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:05.100 09:17:36 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:05.100 09:17:36 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:05.100 09:17:36 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:05.100 09:17:36 -- common/autotest_common.sh@10 -- # set +x 00:03:05.100 ************************************ 00:03:05.100 START TEST setup.sh 00:03:05.100 ************************************ 00:03:05.100 09:17:36 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:05.362 * Looking for test storage... 00:03:05.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:05.362 09:17:36 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:05.362 09:17:36 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:05.362 09:17:36 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:05.362 09:17:36 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:05.362 09:17:36 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:05.362 09:17:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:05.362 ************************************ 00:03:05.362 START TEST acl 00:03:05.362 ************************************ 00:03:05.362 09:17:37 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:05.362 * Looking for test storage... 
00:03:05.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:05.362 09:17:37 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:05.362 09:17:37 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:05.362 09:17:37 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:05.362 09:17:37 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:05.362 09:17:37 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:05.362 09:17:37 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:05.362 09:17:37 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:05.362 09:17:37 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:05.362 09:17:37 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:05.362 09:17:37 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:05.362 09:17:37 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:05.362 09:17:37 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:05.362 09:17:37 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:05.362 09:17:37 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:05.362 09:17:37 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:05.362 09:17:37 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.573 09:17:40 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:09.573 09:17:40 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:09.573 09:17:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.573 09:17:40 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:09.573 09:17:40 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.573 09:17:40 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:12.912 Hugepages 00:03:12.912 node hugesize free / total 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.912 00:03:12.912 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[the same setup/acl.sh@19/@20 match-and-continue trace repeats for the ioatdma devices 0000:00:01.2 through 0000:00:01.7]
00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 09:17:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 09:17:44 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 09:17:44 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 09:17:44 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 09:17:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[the ioatdma match-and-continue trace repeats for 0000:80:01.0 and 0000:80:01.1]
00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 09:17:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme
]] 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:12.912 09:17:44 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:12.912 09:17:44 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:12.912 09:17:44 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:12.913 09:17:44 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:12.913 ************************************ 00:03:12.913 START TEST denied 00:03:12.913 ************************************ 00:03:12.913 09:17:44 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:03:12.913 09:17:44 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:12.913 09:17:44 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:12.913 09:17:44 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:12.913 09:17:44 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.913 09:17:44 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:16.217 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:16.218 09:17:47 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:16.218 09:17:47 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:16.218 09:17:47 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:16.218 09:17:47 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:16.218 09:17:47 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:16.218 09:17:47 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:16.218 09:17:47 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:16.218 09:17:47 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:16.218 09:17:47 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.218 09:17:47 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.520 00:03:21.520 real 0m8.203s 00:03:21.520 user 0m2.756s 00:03:21.520 sys 0m4.689s 00:03:21.520 09:17:52 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:21.520 09:17:52 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:21.520 ************************************ 00:03:21.520 END TEST denied 00:03:21.520 ************************************ 00:03:21.520 09:17:52 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:21.520 09:17:52 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:21.520 09:17:52 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:21.520 09:17:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:21.520 ************************************ 00:03:21.520 START TEST allowed 00:03:21.520 ************************************ 00:03:21.520 09:17:52 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:03:21.520 09:17:52 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:21.520 09:17:52 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:21.520 09:17:52 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.520 09:17:52 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:21.520 09:17:52 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:26.807 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:26.807 09:17:57 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:26.807 09:17:57 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:26.807 09:17:57 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:26.807 09:17:57 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.807 09:17:57 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:30.109 00:03:30.109 real 0m9.107s 00:03:30.109 user 0m2.700s 00:03:30.109 sys 0m4.722s 00:03:30.109 09:18:01 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:30.109 09:18:01 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:30.109 ************************************ 00:03:30.109 END TEST allowed 00:03:30.109 ************************************ 00:03:30.109 00:03:30.109 real 0m24.887s 00:03:30.109 user 0m8.368s 00:03:30.109 sys 0m14.291s 00:03:30.109 09:18:01 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:30.109 09:18:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:30.109 ************************************ 00:03:30.109 END TEST acl 00:03:30.109 ************************************ 00:03:30.373 09:18:01 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:30.373 09:18:01 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:30.373 09:18:01 setup.sh -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:03:30.373 09:18:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:30.373 ************************************ 00:03:30.373 START TEST hugepages 00:03:30.373 ************************************ 00:03:30.373 09:18:01 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:30.373 * Looking for test storage... 00:03:30.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 103231036 kB' 'MemAvailable: 106492216 kB' 'Buffers: 2704 kB' 'Cached: 14349588 kB' 'SwapCached: 0 kB' 'Active: 11380644 kB' 'Inactive: 3516604 kB' 'Active(anon): 10965116 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548956 kB' 'Mapped: 193636 kB' 'Shmem: 10420160 kB' 'KReclaimable: 306756 kB' 'Slab: 1136756 kB' 'SReclaimable: 306756 kB' 'SUnreclaim: 830000 kB' 'KernelStack: 27488 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460868 kB' 'Committed_AS: 12462496 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235124 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB' 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.373 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[the same setup/common.sh@32 '[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]' / continue / IFS=': ' / read trace repeats for each remaining /proc/meminfo field: MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, and HardwareCorrupted]
00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.374 09:18:02 setup.sh.hugepages --
setup/common.sh@32 -- # continue 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.374 09:18:02 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 
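[note] For readers following the xtrace: the elided loop above is setup/common.sh's get_meminfo scanning /proc/meminfo one "key: value" pair at a time until the requested key (here Hugepagesize) matches, then echoing its value. A minimal stand-alone bash sketch of that pattern, reconstructed from the trace rather than copied from SPDK; the per-node sysfs fallback is an assumption inferred from the "local node=" and node/meminfo lines:

    # Sketch only -- reconstructed from the xtrace, not SPDK's exact code.
    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # Assumed per-node variant, suggested by the node/meminfo test in the trace.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the "continue" lines elided above
            echo "$val"                        # e.g. "2048" for Hugepagesize (kB)
            return 0
        done <"$mem_f"
        return 1
    }

Called as get_meminfo Hugepagesize, this yields 2048, which hugepages.sh stores as default_hugepages in the trace above.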
00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:30.374 09:18:02 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:30.374 09:18:02 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:30.374 09:18:02 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:30.374 09:18:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:30.374 ************************************ 00:03:30.374 START TEST default_setup 00:03:30.374 ************************************ 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.375 09:18:02 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:33.677 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:33.939 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:33.939 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:33.939 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:33.939 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:33.939 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:33.939 0000:80:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:03:33.939 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:33.939 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:33.939 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:33.939 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:33.939 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:33.939 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:33.939 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:33.939 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:33.939 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:33.939 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.939 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105409676 kB' 'MemAvailable: 108670856 kB' 'Buffers: 2704 kB' 'Cached: 14349948 kB' 'SwapCached: 0 kB' 'Active: 11393268 kB' 'Inactive: 3516604 kB' 'Active(anon): 10977740 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560636 kB' 'Mapped: 193956 kB' 'Shmem: 10420520 kB' 'KReclaimable: 306756 kB' 'Slab: 1135644 kB' 'SReclaimable: 306756 kB' 'SUnreclaim: 828888 kB' 'KernelStack: 27488 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12477876 kB' 'VmallocTotal: 
13743895347199 kB' 'VmallocUsed: 235204 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
[... repetitive xtrace elided: get_meminfo scanned each /proc/meminfo key (MemTotal through HardwareCorrupted) against AnonHugePages, hitting "continue" on every non-matching key ...]
00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
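[note] verify_nr_hugepages now pulls AnonHugePages (0 here), HugePages_Surp, and HugePages_Rsvd from the same meminfo snapshot. The numbers in the printf dump above are internally consistent with the default_setup request, and that arithmetic can be sanity-checked by hand. A hedged sketch follows; the helper name check_hugepages is illustrative, not SPDK's, and it reuses the get_meminfo sketch shown earlier:

    # Illustrative consistency check over the values in the dump above:
    # 2097152 kB requested at 2048 kB per page => 1024 pages, all still free,
    # and Hugetlb must equal HugePages_Total * Hugepagesize.
    check_hugepages() {
        local total free hp_size hugetlb
        total=$(get_meminfo HugePages_Total)   # 1024
        free=$(get_meminfo HugePages_Free)     # 1024 (none in use yet)
        hp_size=$(get_meminfo Hugepagesize)    # 2048 (kB)
        hugetlb=$(get_meminfo Hugetlb)         # 2097152 (kB)
        (( hugetlb == total * hp_size )) || return 1
        (( total * hp_size == 2097152 )) || return 1   # the size default_setup requested
        (( free <= total )) || return 1
    }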
00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105416288 kB' 'MemAvailable: 108677452 kB' 'Buffers: 2704 kB' 'Cached: 14349952 kB' 'SwapCached: 0 kB' 'Active: 11394892 kB' 'Inactive: 3516604 kB' 'Active(anon): 10979364 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562292 kB' 'Mapped: 193948 kB' 'Shmem: 10420524 kB' 'KReclaimable: 306724 kB' 'Slab: 1135044 kB' 'SReclaimable: 306724 kB' 'SUnreclaim: 828320 kB' 'KernelStack: 27504 kB' 'PageTables: 8832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12495056 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235188 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB' 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.206 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.207 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.207 09:18:05 
00:03:34.207 [... setup/common.sh@31-32 scan continues, one /proc/meminfo field per iteration: SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd are each compared against HugePages_Surp and skipped with continue ...]
00:03:34.208 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:34.208 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:34.208 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:34.208 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
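The long runs of `[[ Field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` / `continue` above are get_meminfo scanning the meminfo snapshot one field at a time (the backslash-escaped right-hand side is just how bash xtrace prints the quoted pattern). A minimal sketch of that helper, reconstructed from the setup/common.sh statement numbers visible in this trace; illustrative, not the verbatim SPDK source:

#!/usr/bin/env bash
# Sketch of the get_meminfo helper this trace is executing (reconstructed
# from the setup/common.sh@16-33 statements above; illustrative only).
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f mem

    mem_f=/proc/meminfo
    # With a node argument, prefer the per-node sysfs view when it exists
    # (the [[ -e /sys/devices/system/node/node$node/meminfo ]] check above).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines are prefixed "Node N "

    # The repeated [[ ... ]] / continue pairs in the log are this loop:
    # read every "field: value" pair until the requested field matches.
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp   # -> 0 on this box, matching surp=0 above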
00:03:34.208 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:34.208 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:34.208 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:34.208 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:34.208 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:34.208 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.208 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.208 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.208 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.208 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.208 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:34.208 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:34.208 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105417840 kB' 'MemAvailable: 108679004 kB' 'Buffers: 2704 kB' 'Cached: 14349968 kB' 'SwapCached: 0 kB' 'Active: 11393824 kB' 'Inactive: 3516604 kB' 'Active(anon): 10978296 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561220 kB' 'Mapped: 193888 kB' 'Shmem: 10420540 kB' 'KReclaimable: 306724 kB' 'Slab: 1135132 kB' 'SReclaimable: 306724 kB' 'SUnreclaim: 828408 kB' 'KernelStack: 27504 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12477672 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235156 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
00:03:34.208-209 [... setup/common.sh@31-32 scan: every field from MemTotal through HugePages_Free is compared against HugePages_Rsvd and skipped with continue ...]
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:34.209 nr_hugepages=1024
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:34.209 resv_hugepages=0
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:34.209 surplus_hugepages=0
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:34.209 anon_hugepages=0
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
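hugepages.sh@99-110 is verifying the pool accounting: the requested page count must match what the kernel's global counters report. A sketch of the same arithmetic, assuming the get_meminfo sketch shown earlier is in scope (variable names illustrative, not the exact script source):

#!/usr/bin/env bash
# Sketch of the accounting walked at hugepages.sh@99-110 above.
nr_hugepages=1024                       # what the test requested
surp=$(get_meminfo HugePages_Surp)      # 0 in the trace above
resv=$(get_meminfo HugePages_Rsvd)      # 0 in the trace above

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$(get_meminfo AnonHugePages)"

# The pool is consistent when the kernel's global total equals
# requested + surplus + reserved; here both checks pass: 1024 == 1024 + 0 + 0.
(( nr_hugepages + surp + resv == $(get_meminfo HugePages_Total) )) || exit 1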
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:34.209 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105417956 kB' 'MemAvailable: 108679120 kB' 'Buffers: 2704 kB' 'Cached: 14350008 kB' 'SwapCached: 0 kB' 'Active: 11393036 kB' 'Inactive: 3516604 kB' 'Active(anon): 10977508 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560312 kB' 'Mapped: 193948 kB' 'Shmem: 10420580 kB' 'KReclaimable: 306724 kB' 'Slab: 1135120 kB' 'SReclaimable: 306724 kB' 'SUnreclaim: 828396 kB' 'KernelStack: 27488 kB' 'PageTables: 8788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12477696 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235156 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
00:03:34.209-210 [... setup/common.sh@31-32 scan: every field from MemTotal through Unaccepted is compared against HugePages_Total and skipped with continue ...]
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:34.210 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53207508 kB' 'MemUsed: 12451500 kB' 'SwapCached: 0 kB' 'Active: 4457748 kB' 'Inactive: 3323076 kB' 'Active(anon): 4315472 kB' 'Inactive(anon): 0 kB' 'Active(file): 142276 kB' 'Inactive(file): 3323076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7546804 kB' 'Mapped: 63828 kB' 'AnonPages: 237228 kB' 'Shmem: 4081452 kB' 'KernelStack: 12712 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148772 kB' 'Slab: 648776 kB' 'SReclaimable: 148772 kB' 'SUnreclaim: 500004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:34.210-211 [... setup/common.sh@31-32 scan of the node0 snapshot: MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped and Unaccepted are each compared against HugePages_Surp and skipped with continue ...]
00:03:34.211 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.211 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.211 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.211 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.211 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:34.211 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:34.211 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:34.211 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.211 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:34.211 09:18:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:34.211 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.211 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.211 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.211 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.211 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:34.211 node0=1024 expecting 1024 00:03:34.211 09:18:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:34.211 00:03:34.211 real 0m3.736s 00:03:34.211 user 0m1.395s 00:03:34.211 sys 0m2.341s 00:03:34.211 09:18:05 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:34.211 09:18:05 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:34.211 ************************************ 00:03:34.211 END TEST default_setup 00:03:34.211 ************************************ 00:03:34.211 09:18:05 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:34.211 09:18:05 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:34.211 09:18:05 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:34.211 09:18:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:34.211 ************************************ 00:03:34.211 START TEST per_node_1G_alloc 00:03:34.211 ************************************ 00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
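[Editor's note] The iterations condensed above (and in the get_meminfo calls traced further down for AnonHugePages, HugePages_Surp, and HugePages_Rsvd) all come from the same scan in setup/common.sh: split each meminfo line on ': ', skip non-matching keys with "continue", print the first match and return. A minimal runnable sketch reconstructed from the traced commands, not the verbatim SPDK source (get_meminfo_sketch is a hypothetical name):

    get_meminfo_sketch() {                    # reconstruction of the traced loop at common.sh@31-33
        local get=$1 var val _
        while IFS=': ' read -r var val _; do  # same split as the traced "IFS=': '; read -r var val _"
            [[ $var == "$get" ]] || continue  # every miss is one "continue" iteration in the log
            echo "$val"                       # first match prints the value (0 for HugePages_Surp here)
            return 0
        done < /proc/meminfo
    }
    get_meminfo_sketch HugePages_Surp         # usage: prints the current surplus hugepage count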
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:34.211 09:18:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:37.598 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:37.598 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:37.598 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:37.598 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:37.598 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:37.598 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:37.598 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:37.598 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:37.598 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:37.598 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:37.598 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:37.598 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:37.598 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:37.598 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:37.598 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:37.598 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:37.598 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
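[Editor's note] The get_test_nr_hugepages 1048576 0 1 call traced above converts a 1 GiB request into default-size pages: with the 2048 kB Hugepagesize reported in the meminfo dumps below, 1048576 / 2048 = 512, hence nr_hugepages=512, nodes_test[0]=nodes_test[1]=512, and NRHUGE=512 with HUGENODE=0,1 handed to setup.sh. A one-line check of the arithmetic:

    echo $(( 1048576 / 2048 ))   # 512 pages per node; 2 nodes x 512 pages = 1024 pages total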
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105441804 kB' 'MemAvailable: 108702968 kB' 'Buffers: 2704 kB' 'Cached: 14350332 kB' 'SwapCached: 0 kB' 'Active: 11392392 kB' 'Inactive: 3516604 kB' 'Active(anon): 10976864 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559264 kB' 'Mapped: 192960 kB' 'Shmem: 10420904 kB' 'KReclaimable: 306724 kB' 'Slab: 1135744 kB' 'SReclaimable: 306724 kB' 'SUnreclaim: 829020 kB' 'KernelStack: 27520 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12466876 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235204 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
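[Editor's note] The mem=("${mem[@]#Node +([0-9]) }") expansion traced at setup/common.sh@29 strips a leading "Node N " prefix from each captured line, so per-node files under /sys/devices/system/node/node*/meminfo (whose lines begin "Node 0 ...") parse the same way as /proc/meminfo; here node= is empty, so the strip is a no-op. A small demonstration with an illustrative value (the "Node 0" line is made up, not from this log):

    shopt -s extglob                       # +([0-9]) in the expansion pattern needs extglob
    mem=("Node 0 HugePages_Total: 512" "MemTotal: 126338836 kB")
    mem=("${mem[@]#Node +([0-9]) }")       # prefix removed where present, other lines unchanged
    printf '%s\n' "${mem[@]}"              # -> "HugePages_Total: 512" and "MemTotal: 126338836 kB"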
00:03:37.598 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [... xtrace loop condensed: every field of the dump above is read and "continue"d until the field name matches AnonHugePages ...]
00:03:37.600 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:37.600 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:37.600 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:37.600 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
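[Editor's note] verify_nr_hugepages is pulling three counters out of the same meminfo snapshot, one get_meminfo call per counter. The shape of the traced sequence is roughly the following (a sketch inferred from the xtrace, not the verbatim setup/hugepages.sh source):

    anon=$(get_meminfo AnonHugePages)    # 0 in the trace above
    surp=$(get_meminfo HugePages_Surp)   # fetched next, 0 below
    resv=$(get_meminfo HugePages_Rsvd)   # fetched after that
    # per-node totals are then accumulated and string-compared,
    # as in the earlier "[[ 1024 == \1\0\2\4 ]]" check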
00:03:37.600 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:37.600 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:37.600 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:37.600 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:37.600 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:37.600 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:37.600 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:37.600 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:37.600 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:37.600 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:37.600 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:37.600 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:37.600 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105443172 kB' 'MemAvailable: 108704336 kB' 'Buffers: 2704 kB' 'Cached: 14350336 kB' 'SwapCached: 0 kB' 'Active: 11391716 kB' 'Inactive: 3516604 kB' 'Active(anon): 10976188 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558580 kB' 'Mapped: 192856 kB' 'Shmem: 10420908 kB' 'KReclaimable: 306724 kB' 'Slab: 1135744 kB' 'SReclaimable: 306724 kB' 'SUnreclaim: 829020 kB' 'KernelStack: 27472 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12464604 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235140 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
00:03:37.600 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [... xtrace loop condensed: every field of the dump above is read and "continue"d until the field name matches HugePages_Surp ...]
00:03:37.602 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:37.602 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:37.602 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:37.602 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
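[Editor's note] With anon=0 and surp=0, the snapshot is internally consistent: every dump in this excerpt reports HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0, and Hugepagesize: 2048 kB, and 1024 pages of 2048 kB account exactly for the Hugetlb line; the two 512-page node reservations make up the full 1024. A one-line check:

    echo "$(( 1024 * 2048 )) kB"   # 2097152 kB == the Hugetlb value reported in the dumps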
00:03:37.602 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:37.602 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:37.602 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:37.602 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:37.602 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:37.602 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:37.602 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:37.602 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:37.602 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:37.602 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:37.602 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105446772 kB' 'MemAvailable: 108707936 kB' 'Buffers: 2704 kB' 'Cached: 14350364 kB' 'SwapCached: 0 kB' 'Active: 11392104 kB' 'Inactive: 3516604 kB' 'Active(anon): 10976576 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558980 kB' 'Mapped: 192856 kB' 'Shmem: 10420936 kB' 'KReclaimable: 306724 kB' 'Slab: 1135760 kB' 'SReclaimable: 306724 kB' 'SUnreclaim: 829036 kB' 'KernelStack: 27488 kB' 'PageTables: 8716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12464268 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235060 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
00:03:37.602 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repetitive key scan condensed: every key from MemTotal through HugePages_Free hit "continue" until HugePages_Rsvd matched]
00:03:37.870 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:37.870 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:37.870 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
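Both lookups above boil down to reading the hugetlb counters out of procfs; outside the harness the same values can be spot-checked with something like:

  grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo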
00:03:37.870 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:37.870 nr_hugepages=1024
00:03:37.870 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:37.870 resv_hugepages=0
00:03:37.870 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:37.870 surplus_hugepages=0
00:03:37.870 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:37.870 anon_hugepages=0
00:03:37.870 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:37.870 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:37.870 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:37.870 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:37.870 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:37.870 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:37.870 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:37.870 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:37.870 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:37.870 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:37.870 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:37.870 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:37.871 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105447388 kB' 'MemAvailable: 108708552 kB' 'Buffers: 2704 kB' 'Cached: 14350396 kB' 'SwapCached: 0 kB' 'Active: 11391460 kB' 'Inactive: 3516604 kB' 'Active(anon): 10975932 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558228 kB' 'Mapped: 192856 kB' 'Shmem: 10420968 kB' 'KReclaimable: 306724 kB' 'Slab: 1135760 kB' 'SReclaimable: 306724 kB' 'SUnreclaim: 829036 kB' 'KernelStack: 27456 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12464304 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235076 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
00:03:37.871 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repetitive key scan condensed: every key from MemTotal through Unaccepted hit "continue" until HugePages_Total matched]
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
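With surp, resv and the system-wide total in hand, the gates at hugepages.sh@107-110 are plain shell arithmetic; a worked instance with this run's values:

  nr_hugepages=1024 surp=0 resv=0
  (( 1024 == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0 -> true
  (( 1024 == nr_hugepages ))                 # also true: no surplus or reserved pages to account for
  echo $?                                    # -> 0, so the allocation is consistent and the test proceeds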
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54267628 kB' 'MemUsed: 11391380 kB' 'SwapCached: 0 kB' 'Active: 4456612 kB' 'Inactive: 3323076 kB' 'Active(anon): 4314336 kB' 'Inactive(anon): 0 kB' 'Active(file): 142276 kB' 'Inactive(file): 3323076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7546816 kB' 'Mapped: 62984 kB' 'AnonPages: 236024 kB' 'Shmem: 4081464 kB' 'KernelStack: 12712 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148772 kB' 'Slab: 649100 kB' 'SReclaimable: 148772 kB' 'SUnreclaim: 500328 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:37.873 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repetitive node0 key scan condensed: MemTotal through HugePages_Total each hit "continue"; the capture ends here, mid-scan, before HugePages_Surp is reached]
00:03:37.874 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.874 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.874 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.874 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679828 kB' 'MemFree: 51179508 kB' 'MemUsed: 9500320 kB' 'SwapCached: 0 kB' 'Active: 6934760 kB' 'Inactive: 193528 kB' 'Active(anon): 6661508 kB' 'Inactive(anon): 0 kB' 'Active(file): 273252 kB' 'Inactive(file): 193528 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6806324 kB' 'Mapped: 129872 kB' 'AnonPages: 322048 kB' 'Shmem: 6339544 kB' 'KernelStack: 14728 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157952 kB' 'Slab: 486660 kB' 'SReclaimable: 157952 kB' 'SUnreclaim: 328708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
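The frames above walk the whole of setup/common.sh's get_meminfo loop: pick /proc/meminfo, or /sys/devices/system/node/node<N>/meminfo when a node argument is given; mapfile the file; strip the leading 'Node <n> ' prefix so both formats parse alike; then split each line on IFS=': ' and echo the value once the requested field matches. A minimal standalone sketch of that parsing approach, assuming only bash 4+ on Linux (get_meminfo_sketch is a hypothetical name, not the SPDK helper itself):

#!/usr/bin/env bash
# Sketch of the get_meminfo parsing shown in the xtrace: return the value
# of one field from /proc/meminfo, or from the per-node meminfo file when
# a NUMA node number is supplied.
shopt -s extglob   # enables the +([0-9]) pattern used to strip "Node N "
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node lines read "Node 1 HugePages_Surp: 0"; drop the prefix so
    # the same field split works for both files.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
get_meminfo_sketch HugePages_Surp 1

Run against the node1 snapshot printed above, this prints 0, matching the '# echo 0' and '# return 0' frames that follow.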
00:03:37.875 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [per-field scan of node1 meminfo: MemTotal through FilePmdMapped are each compared to HugePages_Surp and skipped via 'continue'] 00:03:37.876 09:18:09
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:37.876 node0=512 expecting 512 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:37.876 node1=512 expecting 512 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:37.876 00:03:37.876 real 0m3.549s 00:03:37.876 user 0m1.469s 00:03:37.876 sys 0m2.145s 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:37.876 09:18:09 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:37.876 ************************************ 00:03:37.876 END TEST per_node_1G_alloc 00:03:37.876 ************************************ 00:03:37.876 09:18:09 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:37.876 09:18:09 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:37.876 09:18:09 
setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:37.876 09:18:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:37.876 ************************************ 00:03:37.876 START TEST even_2G_alloc 00:03:37.876 ************************************ 00:03:37.876 09:18:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:03:37.876 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.877 09:18:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:41.200 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:41.200 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:41.200 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:03:41.200 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:41.200 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:41.200 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:41.200 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:41.200 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:41.201 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:41.201 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:41.201 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:41.201 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:41.201 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:41.201 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:41.201 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:41.201 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:41.201 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105435660 kB' 'MemAvailable: 108696808 kB' 'Buffers: 2704 kB' 'Cached: 14350544 kB' 'SwapCached: 0 kB' 'Active: 11388320 kB' 'Inactive: 3516604 kB' 'Active(anon): 10972792 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554432 kB' 'Mapped: 192060 kB' 'Shmem: 10421116 kB' 'KReclaimable: 306692 kB' 'Slab: 1135696 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 829004 kB' 'KernelStack: 27360 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12456080 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235104 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB' 00:03:41.201 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [per-field scan of /proc/meminfo: MemTotal through HardwareCorrupted are each compared to AnonHugePages and skipped via 'continue'] 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
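even_2G_alloc asked get_test_nr_hugepages for 2097152 kB, i.e. nr_hugepages=1024 pages of 2048 kB, and with HUGE_EVEN_ALLOC=yes across two NUMA nodes the expectation is the same 512-per-node split that per_node_1G_alloc just verified ('node0=512 expecting 512' above). A hedged sketch of that even-split-and-verify idea against the kernel's standard per-node sysfs knobs (NR_HUGE, the integer division, and the echo format are assumptions modelled on this log, not the SPDK scripts; writing nr_hugepages needs root):

#!/usr/bin/env bash
# Sketch: spread NR_HUGE 2 MiB hugepages evenly across the NUMA nodes via
# sysfs, then verify each node's count, echoing "nodeX=N expecting M".
NR_HUGE=${NR_HUGE:-1024}
nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$(( NR_HUGE / ${#nodes[@]} ))
for node in "${nodes[@]}"; do
    # Requesting pages through the per-node knob pins them to that node.
    echo "$per_node" |
        sudo tee "$node/hugepages/hugepages-2048kB/nr_hugepages" > /dev/null
done
status=0
for node in "${nodes[@]}"; do
    got=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    echo "$(basename "$node")=$got expecting $per_node"
    (( got == per_node )) || status=1
done
exit "$status"

The test's verify_nr_hugepages is the stricter form of this check: the hugepages.sh@115-117 frames above fold per-node HugePages_Surp (0 here) and reserved pages into nodes_test before comparing.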
00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105435356 kB' 'MemAvailable: 108696504 kB' 'Buffers: 2704 kB' 'Cached: 14350548 kB' 'SwapCached: 0 kB' 'Active: 11387960 kB' 'Inactive: 3516604 kB' 'Active(anon): 10972432 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554624 kB' 'Mapped: 191956 kB' 'Shmem: 10421120 kB' 'KReclaimable: 306692 kB' 'Slab: 1135680 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 828988 kB' 'KernelStack: 27344 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12456100 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235088 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB' 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.202 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [per-field scan of /proc/meminfo resumes: Buffers, Cached, SwapCached, Active/Inactive(+anon/file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty and Writeback are each compared to HugePages_Surp] 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc
-- setup/common.sh@32 -- # continue 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.203 09:18:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.203 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.204 09:18:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
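Everything from get_meminfo HugePages_Surp down to surp=0 above is bash xtrace from SPDK's get_meminfo helper in setup/common.sh: snapshot the meminfo source with mapfile, strip any "Node N " prefix, then scan key by key (the long run of continue lines) until the requested field matches, and echo its value. A minimal re-creation of that helper, pieced together from the line references visible in the trace -- an approximation, not SPDK's exact code -- would be:

#!/usr/bin/env bash
# Sketch of the get_meminfo helper traced above; node handling and error
# paths in the real setup/common.sh may differ.
shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

get_meminfo() {
    local get=$1 node=${2:-}   # field name, optional NUMA node number
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # With $node empty the probe degenerates to .../node/node/meminfo, which
    # never exists -- exactly the [[ -e ... ]] line in the trace -- so the
    # helper falls back to the global /proc/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it so both
    # sources parse identically.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated continue in the trace
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp   # -> 0, the value assigned to surp above

Feeding the loop from printf '%s\n' "${mem[@]}" rather than re-reading the file would also explain why a setup/common.sh@16 printf of the whole snapshot shows up in the trace before the key-by-key comparisons. By hand, the same counters can be pulled in one go; with the values from the dumps above, the box would show:

$ grep -E '^(HugePages_|Hugepagesize)' /proc/meminfo
HugePages_Total:    1024
HugePages_Free:     1024
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB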
00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105435400 kB' 'MemAvailable: 108696548 kB' 'Buffers: 2704 kB' 'Cached: 14350564 kB' 'SwapCached: 0 kB' 'Active: 11388012 kB' 'Inactive: 3516604 kB' 'Active(anon): 10972484 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554624 kB' 'Mapped: 191956 kB' 'Shmem: 10421136 kB' 'KReclaimable: 306692 kB' 'Slab: 1135680 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 828988 kB' 'KernelStack: 27344 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12456120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235088 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:41.204 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical setup/common.sh@32/@31 xtrace repeats for each remaining meminfo key, MemFree through HugePages_Free, every one failing the HugePages_Rsvd match and hitting continue ...]
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:41.207 nr_hugepages=1024
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:41.207 resv_hugepages=0
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:41.207 surplus_hugepages=0
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:41.207 anon_hugepages=0
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
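Spelled out, the bookkeeping setup/hugepages.sh performs around these calls is plain arithmetic over the kernel's hugepage counters. A condensed sketch using the variable names from the trace (the surrounding allocation logic of the even_2G_alloc test is assumed, not shown in this excerpt):

# Hypothetical condensation of the setup/hugepages.sh@99-110 steps traced above.
nr_hugepages=1024                      # pool size the test configured earlier
surp=$(get_meminfo HugePages_Surp)     # surplus pages -> 0 in the run above
resv=$(get_meminfo HugePages_Rsvd)     # reserved pages -> 0 in the run above
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=0"                # AnonHugePages is likewise 0 kB in the dumps
# The pool checks out only if every configured page is accounted for. The
# literal 1024 below mirrors the traced (( ... )) lines, which xtrace shows
# post-expansion; the actual script likely substitutes a variable here.
(( 1024 == nr_hugepages + surp + resv ))
(( 1024 == nr_hugepages ))
get_meminfo HugePages_Total            # the re-read that the trace continues with

Both (( )) lines act as assertions: the script appears to run with set -e, so a false comparison would abort the test here rather than let a short hugepage pool slip through.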
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105435580 kB' 'MemAvailable: 108696728 kB' 'Buffers: 2704 kB' 'Cached: 14350592 kB' 'SwapCached: 0 kB' 'Active: 11388040 kB' 'Inactive: 3516604 kB' 'Active(anon): 10972512 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554632 kB' 'Mapped: 191956 kB' 'Shmem: 10421164 kB' 'KReclaimable: 306692 kB' 'Slab: 1135680 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 828988 kB' 'KernelStack: 27344 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12456144 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235088 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:41.207 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical setup/common.sh@32/@31 xtrace repeats for the remaining meminfo keys, MemFree through Unaccepted, each failing the HugePages_Total match and hitting continue ...]
09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.209 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.209 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.209 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:41.209 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.209 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.209 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:41.209 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:41.209 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.209 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:41.209 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.209 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:41.209 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:41.209 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:41.209 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.209 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54236888 kB' 'MemUsed: 11422120 kB' 'SwapCached: 0 kB' 'Active: 4452904 kB' 'Inactive: 3323076 kB' 'Active(anon): 4310628 kB' 'Inactive(anon): 0 kB' 'Active(file): 142276 kB' 'Inactive(file): 3323076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7546816 kB' 'Mapped: 62980 kB' 'AnonPages: 232288 kB' 'Shmem: 4081464 kB' 'KernelStack: 12664 kB' 'PageTables: 
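Everything in this stretch of the log is one shell helper being stepped through under xtrace. As a reading aid, here is a minimal bash sketch of that helper, reconstructed purely from the commands visible in the trace (the real implementation lives in setup/common.sh of the SPDK tree and differs in detail):

    #!/usr/bin/env bash
    # get_meminfo <field> [node] -- sketch of the traced helper.
    # Prints the value of <field> from /proc/meminfo, or from the
    # per-node meminfo file under sysfs when a node index is given.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem line
        mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node <n> " prefix; strip it (extglob).
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # Split "Field: value kB" into field name and value.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

Against the dumps captured in this log, get_meminfo HugePages_Total prints 1024 and get_meminfo HugePages_Surp 0 prints 0; the echo 1024 / echo 0 followed by return 0 entries in the trace are exactly those two lines firing.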
00:03:41.209 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:41.209 09:18:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:41.209 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54236888 kB' 'MemUsed: 11422120 kB' 'SwapCached: 0 kB' 'Active: 4452904 kB' 'Inactive: 3323076 kB' 'Active(anon): 4310628 kB' 'Inactive(anon): 0 kB' 'Active(file): 142276 kB' 'Inactive(file): 3323076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7546816 kB' 'Mapped: 62980 kB' 'AnonPages: 232288 kB' 'Shmem: 4081464 kB' 'KernelStack: 12664 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148740 kB' 'Slab: 649232 kB' 'SReclaimable: 148740 kB' 'SUnreclaim: 500492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... identical @31 read / @32 compare-and-continue xtrace entries over the node0 fields above until HugePages_Surp matches ...]
00:03:41.474 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:41.474 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:41.474 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:41.474 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:41.474 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:41.474 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:41.474 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:41.474 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:41.474 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:41.474 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:41.474 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:41.474 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.474 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:41.474 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:41.474 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.474 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.474 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:41.474 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:41.474 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679828 kB' 'MemFree: 51198664 kB' 'MemUsed: 9481164 kB' 'SwapCached: 0 kB' 'Active: 6935220 kB' 'Inactive: 193528 kB' 'Active(anon): 6661968 kB' 'Inactive(anon): 0 kB' 'Active(file): 273252 kB' 'Inactive(file): 193528 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6806516 kB' 'Mapped: 128976 kB' 'AnonPages: 322384 kB' 'Shmem: 6339736 kB' 'KernelStack: 14680 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157952 kB' 'Slab: 486448 kB' 'SReclaimable: 157952 kB' 'SUnreclaim: 328496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
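The mem=("${mem[@]#Node +([0-9]) }") step above is easy to miss: per-node meminfo lines are prefixed with "Node <n> ", and the extglob pattern strips that prefix so the same parser handles both the global and per-node files. In isolation, with sample lines copied from the node0 dump above:

    #!/usr/bin/env bash
    # Demo of the prefix strip at setup/common.sh@29.
    shopt -s extglob
    mem=('Node 0 MemTotal: 65659008 kB' 'Node 0 HugePages_Surp: 0')
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
    # Output:
    #   MemTotal: 65659008 kB
    #   HugePages_Surp: 0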
[... identical @31 read / @32 compare-and-continue xtrace entries over the node1 fields above until HugePages_Surp matches ...]
00:03:41.475 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:41.475 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:41.475 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:41.475 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:41.475 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:41.476 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:41.476 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:41.476 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:41.476 node0=512 expecting 512
00:03:41.476 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:41.476 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:41.476 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:41.476 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:41.476 node1=512 expecting 512
00:03:41.476 09:18:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:41.476 
00:03:41.476 real    0m3.465s
00:03:41.476 user    0m1.321s
00:03:41.476 sys     0m2.190s
00:03:41.476 09:18:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:41.476 09:18:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:41.476 ************************************
00:03:41.476 END TEST even_2G_alloc
00:03:41.476 ************************************
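For orientation, the check that even_2G_alloc just completed, condensed into a standalone sketch (the real bookkeeping in setup/hugepages.sh differs slightly; get_meminfo is the sketch shown earlier):

    #!/usr/bin/env bash
    # With HUGE_EVEN_ALLOC, the 1024 configured pages must split evenly
    # across the 2 NUMA nodes, after adding per-node surplus and reserved.
    nr_hugepages=1024 surp=0 resv=0
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1
    expected=$(( nr_hugepages / 2 ))            # even split over 2 nodes
    for node in 0 1; do
        got=$(( expected + resv + $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=$got expecting $expected"
    done
    # Prints, as in the trace:
    #   node0=512 expecting 512
    #   node1=512 expecting 512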
00:03:41.476 09:18:13 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:41.476 09:18:13 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:41.476 09:18:13 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:41.476 09:18:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:41.476 ************************************
00:03:41.476 START TEST odd_alloc
00:03:41.476 ************************************
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
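The odd page count here is deliberate. A worked version of the sizing that was just traced, assuming ceiling division (the exact rounding lives inside setup/hugepages.sh):

    #!/usr/bin/env bash
    # Why HUGEMEM=2049 yields 1025 pages and a 513/512 split.
    size_kb=2098176                              # 2049 MB, as passed above
    page_kb=2048                                 # Hugepagesize on this box
    nr=$(( (size_kb + page_kb - 1) / page_kb ))  # ceil(2098176 / 2048)
    echo "$nr"                                   # 1025, deliberately odd
    # 1025 cannot split evenly over 2 nodes: node0 gets 513, node1 gets 512,
    # matching nodes_test[0]=513 / nodes_test[1]=512 in the trace above.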
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:41.476 09:18:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:44.788 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:44.788 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:44.788 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:44.788 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:44.788 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:44.788 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:44.788 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:44.788 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:44.788 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:44.788 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:44.788 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:44.788 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:44.788 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:44.788 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:44.788 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:44.788 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:44.788 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105457580 kB' 'MemAvailable: 108718728 kB' 'Buffers: 2704 kB' 'Cached: 14350720 kB' 'SwapCached: 0 kB' 'Active: 11389512 kB' 'Inactive: 3516604 kB' 'Active(anon): 10973984 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556008 kB' 'Mapped: 191992 kB' 'Shmem: 10421292 kB' 'KReclaimable: 306692 kB' 'Slab: 1136152 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 829460 kB' 'KernelStack: 27376 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508420 kB' 'Committed_AS: 12457224 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235088 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
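The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above gates the anonymous-THP sample. Spelled out as a sketch, with the mode string presumably read from /sys/kernel/mm/transparent_hugepage/enabled:

    #!/usr/bin/env bash
    # The kernel reports the active THP mode with brackets, e.g.
    # "always [madvise] never"; only when the selection is not [never]
    # is it worth sampling AnonHugePages at all.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # helper sketched earlier; 0 here
    else
        anon=0
    fi
    echo "anon=$anon"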
'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB' 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- 
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.788 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... identical @31/@32 compare-and-continue trace elided for every remaining /proc/meminfo field until the requested key matches ...]
00:03:44.790 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.790 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:44.790 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:44.790 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
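The trace above is get_meminfo from SPDK's test/setup/common.sh scanning /proc/meminfo one line at a time until the requested key matches, then echoing its value. A minimal reconstruction of that pattern; mem_f, mem, var, val, the Node-prefix strip, and the echo/return shape are taken directly from the @17-@33 trace lines, while the surrounding wiring is an assumption, not the verbatim helper:

  get_meminfo() {                                 # sketch, not the exact SPDK source
      local get=$1 node=${2:-}                    # key to look up, optional NUMA node
      local var val
      local mem_f=/proc/meminfo
      # with a node given, read the per-node counters from sysfs instead (common.sh@23)
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem <"$mem_f"                    # common.sh@28
      shopt -s extglob                            # needed for the +([0-9]) pattern below
      mem=("${mem[@]#Node +([0-9]) }")            # strip "Node N " prefix (common.sh@29)
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<<"$line"   # common.sh@31
          [[ $var == "$get" ]] || continue        # common.sh@32
          echo "$val"                             # common.sh@33: kB for sizes, bare count for HugePages_*
          return 0
      done
      return 1
  }

So the anon=0 recorded above is simply get_meminfo AnonHugePages finding 'AnonHugePages: 0 kB' in the snapshot and echoing its value.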
00:03:45.056 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:45.056 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.056 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:45.056 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:45.056 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.056 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.056 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.056 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.056 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.057 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.057 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.057 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.057 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105456648 kB' 'MemAvailable: 108717796 kB' 'Buffers: 2704 kB' 'Cached: 14350724 kB' 'SwapCached: 0 kB' 'Active: 11390392 kB' 'Inactive: 3516604 kB' 'Active(anon): 10974864 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556872 kB' 'Mapped: 191972 kB' 'Shmem: 10421296 kB' 'KReclaimable: 306692 kB' 'Slab: 1136136 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 829444 kB' 'KernelStack: 27344 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508420 kB' 'Committed_AS: 12476516 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235040 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
[... identical @31/@32 compare-and-continue trace elided for every field preceding HugePages_Surp ...]
00:03:45.058 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.058 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.058 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:45.059 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:45.059 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:45.059 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:45.059 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:45.059 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:45.059 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.059 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.059 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.059 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.059 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.059 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.059 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.059 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.059 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105457680 kB' 'MemAvailable: 108718828 kB' 'Buffers: 2704 kB' 'Cached: 14350724 kB' 'SwapCached: 0 kB' 'Active: 11389420 kB' 'Inactive: 3516604 kB' 'Active(anon): 10973892 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555920 kB' 'Mapped: 192476 kB' 'Shmem: 10421296 kB' 'KReclaimable: 306692 kB' 'Slab: 1136204 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 829512 kB' 'KernelStack: 27344 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508420 kB' 'Committed_AS: 12458384 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235008 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
[... identical @31/@32 compare-and-continue trace elided for every field preceding HugePages_Rsvd ...]
00:03:45.060 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:45.060 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.060 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:45.060 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:45.060 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:45.060 nr_hugepages=1025
00:03:45.060 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:45.060 resv_hugepages=0
00:03:45.060 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:45.060 surplus_hugepages=0
00:03:45.060 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:45.060 anon_hugepages=0
00:03:45.060 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:45.060 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
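The @107/@109 checks are the substance of the odd_alloc case: the test set up an odd pool size (1025 pages) and the kernel's hugepage accounting must agree exactly, with no surplus or reserved pages outstanding. A runnable reconstruction of those two checks, assuming get_meminfo behaves as sketched earlier; the 1025 literal and the surp/resv names come straight from the trace, while the variable wiring here is an assumption, not the script's exact source:

  requested=1025                                 # the odd page count this test case set up
  surp=$(get_meminfo HugePages_Surp)             # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)             # 0 in this run
  nr_hugepages=$(get_meminfo HugePages_Total)    # 1025 in this run
  (( requested == nr_hugepages + surp + resv ))  # hugepages.sh@107, as traced
  (( requested == nr_hugepages ))                # hugepages.sh@109, as traced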
00:03:45.060 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:45.060 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:45.061 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:45.061 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:45.061 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.061 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.061 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.061 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.061 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.061 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.061 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.061 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.061 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105459036 kB' 'MemAvailable: 108720184 kB' 'Buffers: 2704 kB' 'Cached: 14350724 kB' 'SwapCached: 0 kB' 'Active: 11391308 kB' 'Inactive: 3516604 kB' 'Active(anon): 10975780 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557844 kB' 'Mapped: 192476 kB' 'Shmem: 10421296 kB' 'KReclaimable: 306692 kB' 'Slab: 1136204 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 829512 kB' 'KernelStack: 27312 kB' 'PageTables: 8144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508420 kB' 'Committed_AS: 12460256 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235008 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
[... identical @31/@32 compare-and-continue trace continues toward the HugePages_Total match ...]
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.062 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.063 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54239340 kB' 'MemUsed: 11419668 kB' 'SwapCached: 0 kB' 'Active: 4458604 kB' 'Inactive: 3323076 kB' 'Active(anon): 4316328 kB' 'Inactive(anon): 0 kB' 'Active(file): 142276 kB' 'Inactive(file): 3323076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7546868 kB' 'Mapped: 62980 kB' 'AnonPages: 237996 kB' 'Shmem: 4081516 kB' 'KernelStack: 12616 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148740 kB' 'Slab: 649540 kB' 'SReclaimable: 148740 kB' 'SUnreclaim: 500800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
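For reference, the helper being traced in the @16-@33 lines above boils down to the following sketch. The file names and expansions are taken from the trace itself; the exact loop shape is an assumption (the real common.sh scans the printf output with a read loop, this sketch iterates the mapfile array directly, same effect):

  #!/usr/bin/env bash
  # Sketch of setup/common.sh's get_meminfo as suggested by the xtrace above.
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      # A per-node query switches to that NUMA node's own meminfo file;
      # with node empty, node/meminfo does not exist and /proc/meminfo stays.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix (extglob)
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"   # kB for sizes, a bare count for HugePages_* fields
          return 0
      done
      return 1
  }
  get_meminfo HugePages_Total    # -> 1025 on this machine, per the dump above
  get_meminfo HugePages_Surp 0   # -> 0 (surplus pages on NUMA node 0)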
[identical @31 read / @32 compare-and-continue xtrace for every node0 meminfo key ahead of HugePages_Surp elided]
00:03:45.064 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.064 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.064 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:45.064 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:45.064 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:45.064 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:45.064 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:45.064 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.064 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:45.064 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:45.064 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.064 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.064 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:45.064 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:45.064 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.064 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.064 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.064 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.064 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679828 kB' 'MemFree: 51219388 kB' 'MemUsed: 9460440 kB' 'SwapCached: 0 kB' 'Active: 6935084 kB' 'Inactive: 193528 kB' 'Active(anon): 6661832 kB' 'Inactive(anon): 0 kB' 'Active(file): 273252 kB' 'Inactive(file): 193528 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6806636 kB' 'Mapped: 129708 kB' 'AnonPages: 322152 kB' 'Shmem: 6339856 kB' 'KernelStack: 14712 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157952 kB' 'Slab: 486664 kB' 'SReclaimable: 157952 kB' 'SUnreclaim: 328712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
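The @29 expansion above matters for the per-node files: unlike /proc/meminfo, every line in /sys/devices/system/node/nodeN/meminfo carries a "Node N " prefix, and the extglob pattern strips it so the same key scan works for both. A one-line illustration (the sample line is modelled on the node1 dump, not copied from a real file):

  shopt -s extglob                       # +([0-9]) is an extglob pattern
  line='Node 1 HugePages_Total:   513'   # per-node meminfo line format
  echo "${line#Node +([0-9]) }"          # -> 'HugePages_Total:   513'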
[identical @31 read / @32 compare-and-continue xtrace for every node1 meminfo key ahead of HugePages_Surp elided]
00:03:45.066 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.066 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.066 09:18:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:45.066 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
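The @115-@117 loop just traced adds reserved and surplus pages to each node's expected count; both are 0 in this run, so the 512/513 split is left untouched. A compact sketch of that accounting, reusing the get_meminfo sketch above (the resv value and the exact split are inferred from the trace, not copied from hugepages.sh):

  nodes_test=([0]=513 [1]=512)   # expected per-node split of the 1025 pages
  resv=0                         # reserved pages; 0 in this run
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))
      surp=$(get_meminfo HugePages_Surp "$node")   # 0 for both nodes above
      (( nodes_test[node] += surp ))
  done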
00:03:45.066 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:45.066 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:45.066 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:45.066 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:45.066 node0=512 expecting 513
00:03:45.066 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:45.066 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:45.066 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:45.066 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:45.066 node1=513 expecting 512
00:03:45.066 09:18:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:45.066
00:03:45.066 real	0m3.580s
00:03:45.066 user	0m1.405s
00:03:45.066 sys	0m2.239s
00:03:45.066 09:18:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:45.066 09:18:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:45.066 ************************************
00:03:45.066 END TEST odd_alloc
00:03:45.066 ************************************
00:03:45.066 09:18:16 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:45.066 09:18:16 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:45.066 09:18:16 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:45.066 09:18:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:45.066 ************************************
00:03:45.066 START TEST custom_alloc
00:03:45.066 ************************************
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
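A note on the odd_alloc verdict above: 1025 pages cannot split evenly across two nodes, and the kernel may park the odd page on either node, which is why node0 reads 512 where 513 was "expecting" and the test still passes. The @126-@130 lines compare sorted multisets instead of per-node values, using the counts as numeric indices of plain arrays so the key lists always expand in ascending order. A sketch (the roles of nodes_sys vs nodes_test are inferred from the "node0=512 expecting 513" output, not from hugepages.sh itself):

  nodes_sys=([0]=512 [1]=513)    # actual counts read back from sysfs above
  nodes_test=([0]=513 [1]=512)   # the split the test had computed
  sorted_t=() sorted_s=()
  for node in "${!nodes_test[@]}"; do
      sorted_t[nodes_test[node]]=1   # numeric index doubles as a sort key
      sorted_s[nodes_sys[node]]=1
  done
  # Both key lists expand to "512 513", so the layouts match as multisets:
  [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo 'hugepage layout OK'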
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
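The two get_test_nr_hugepages calls traced above reduce to simple division; the sizes behave as kB values, which is the only reading consistent with the resulting page counts and the "Hugepagesize: 2048 kB" in the dumps (an inference, since the units are not printed in the trace):

  hugepagesize_kb=2048                   # 'Hugepagesize: 2048 kB' in the dumps
  p1=$(( 1048576 / hugepagesize_kb ))    # first pass:  1 GiB -> 512 pages
  p2=$(( 2097152 / hugepagesize_kb ))    # second pass: 2 GiB -> 1024 pages
  echo "nodes_hp[0]=$p1 nodes_hp[1]=$p2 nr_hugepages=$(( p1 + p2 ))"
  # -> nodes_hp[0]=512 nodes_hp[1]=1024 nr_hugepages=1536 (matches @188 below)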
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:45.066 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:45.068 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:45.068 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:45.068 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:45.068 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:45.068 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:45.068 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:45.068 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:45.068 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:45.068 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:45.068 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:45.068 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:45.068 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:45.068 09:18:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:45.068 09:18:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:48.376 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:48.376 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
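The "local IFS=," at @167 is what turns the HUGENODE array into the comma-joined string seen at @187: with a one-character IFS, "${arr[*]}" joins the elements on that character. A self-contained sketch (the function name build_hugenode is illustrative, not part of hugepages.sh):

  build_hugenode() {
      local IFS=,                        # makes "${HUGENODE[*]}" comma-joined
      local node
      local nodes_hp=([0]=512 [1]=1024)  # per-node targets from the trace
      local HUGENODE=()
      for node in "${!nodes_hp[@]}"; do
          HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
      done
      echo "${HUGENODE[*]}"
  }
  build_hugenode    # -> nodes_hp[0]=512,nodes_hp[1]=1024, as handed to setup.sh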
00:03:45.068 09:18:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:45.068 09:18:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:45.068 09:18:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:48.376 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:48.376 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
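verify_nr_hugepages reads everything back through get_meminfo in setup/common.sh. The @17-@29 preamble traced below picks the data source: with no node argument, the /sys/devices/system/node/node/meminfo probe at @23 is the literal (nonexistent) path and the function falls back to /proc/meminfo, then strips the "Node <n> " prefix that only per-node files carry, so one parser serves both sources. A sketch of that selection, reconstructed from the trace (the exact SPDK control flow may differ):

    shopt -s extglob   # the +([0-9]) strip pattern at @29 needs extglob
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val mem_f mem
        mem_f=/proc/meminfo                                   # @22: default source
        # @23/@25: use the per-node file only when a node was named and exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"                             # @28
        mem=("${mem[@]#Node +([0-9]) }")                      # @29: strip "Node <n> "
        # field scan follows; see the loop sketch after the AnonHugePages lookup below
    }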
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.376 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:48.377 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:48.377 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 104430024 kB' 'MemAvailable: 107691172 kB' 'Buffers: 2704 kB' 'Cached: 14350892 kB' 'SwapCached: 0 kB' 'Active: 11391732 kB' 'Inactive: 3516604 kB' 'Active(anon): 10976204 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557996 kB' 'Mapped: 192008 kB' 'Shmem: 10421464 kB' 'KReclaimable: 306692 kB' 'Slab: 1137124 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 830432 kB' 'KernelStack: 27376 kB' 'PageTables: 8428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985156 kB' 'Committed_AS: 12458168 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235152 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
00:03:48.377 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:48.377 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the same @31/@32 read/compare/continue cycle repeats for every field from MemFree through HardwareCorrupted (00:03:48.377-00:03:48.378) ...]
00:03:48.378 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:48.378 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:48.378 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:48.378 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
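The long run of @31/@32 entries condensed above all comes from a single field-scan loop: the @16 printf replays the captured meminfo lines, each is split on ': ', the key is compared with the requested field, non-matches are skipped with continue, and the first match echoes its value (0 kB of AnonHugePages here, hence anon=0) and returns. A sketch of the loop as it would sit inside the get_meminfo function sketched earlier; the backslash-escaped \A\n\o\n... in the trace is just xtrace printing the unquoted pattern operand:

    while IFS=': ' read -r var val _; do      # @31: 'AnonHugePages: 0 kB' -> var, val
        [[ $var == "$get" ]] || continue      # @32: skip every non-matching field
        echo "$val"                           # @33: '0' here, hence anon=0 above
        return 0
    done < <(printf '%s\n' "${mem[@]}")      # @16: the full dump printed above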
00:03:48.378 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:48.378 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:48.378 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:48.378 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:48.378 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:48.378 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.378 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.378 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.378 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.378 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.378 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:48.378 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:48.378 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 104431636 kB' 'MemAvailable: 107692784 kB' 'Buffers: 2704 kB' 'Cached: 14350896 kB' 'SwapCached: 0 kB' 'Active: 11391052 kB' 'Inactive: 3516604 kB' 'Active(anon): 10975524 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557824 kB' 'Mapped: 191992 kB' 'Shmem: 10421468 kB' 'KReclaimable: 306692 kB' 'Slab: 1137120 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 830428 kB' 'KernelStack: 27344 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985156 kB' 'Committed_AS: 12458188 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235104 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
00:03:48.378 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.378 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the same @31/@32 read/compare/continue cycle repeats for every field from MemFree through HugePages_Rsvd (00:03:48.378-00:03:48.646) ...]
00:03:48.646 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.646 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:48.646 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:48.646 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
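Both probes so far come back 0 (no surplus pages, and next, no reserved ones), while the dump itself already shows HugePages_Total and HugePages_Free at 1536, i.e. the 512+1024 split was applied. A hypothetical spot-check, not part of the test, that would confirm the per-node placement on a box like this using the kernel's per-node 2 MiB counters:

    for n in /sys/devices/system/node/node[0-9]*; do
        echo "$n: $(cat "$n"/hugepages/hugepages-2048kB/nr_hugepages)"
    done
    # expected on this rig: node0 -> 512, node1 -> 1024, summing to the
    # HugePages_Total: 1536 reported in /proc/meminfo above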
00:03:48.646 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:48.646 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:48.646 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:48.646 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:48.646 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:48.646 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.646 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.646 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.646 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.646 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.646 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:48.646 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:48.646 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 104431356 kB' 'MemAvailable: 107692504 kB' 'Buffers: 2704 kB' 'Cached: 14350896 kB' 'SwapCached: 0 kB' 'Active: 11390956 kB' 'Inactive: 3516604 kB' 'Active(anon): 10975428 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557720 kB' 'Mapped: 191992 kB' 'Shmem: 10421468 kB' 'KReclaimable: 306692 kB' 'Slab: 1137120 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 830428 kB' 'KernelStack: 27344 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985156 kB' 'Committed_AS: 12458208 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235104 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
00:03:48.646 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.647 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the same @31/@32 read/compare/continue cycle repeats for every field from MemFree through AnonHugePages (00:03:48.647-00:03:48.648) ...]
00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:48.648 nr_hugepages=1536 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.648 resv_hugepages=0 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.648 surplus_hugepages=0 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.648 anon_hugepages=0 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 104431268 kB' 'MemAvailable: 107692416 kB' 'Buffers: 2704 kB' 'Cached: 14350932 kB' 'SwapCached: 0 kB' 'Active: 11392544 kB' 'Inactive: 3516604 kB' 'Active(anon): 10977016 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559332 kB' 'Mapped: 191992 kB' 'Shmem: 10421504 kB' 'KReclaimable: 306692 kB' 'Slab: 1137120 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 830428 kB' 'KernelStack: 27360 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985156 kB' 'Committed_AS: 12478040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235104 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB' 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.648 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.649 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- 
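The walls of read/compare/continue records above are the xtrace of setup/common.sh's get_meminfo helper: it loads a meminfo file into an array, strips any per-node prefix, then scans key by key until the requested field matches and its value is echoed. A minimal sketch of that behavior, reconstructed from the trace alone (not the verbatim SPDK setup/common.sh source; treat names and details as assumptions):

#!/usr/bin/env bash
shopt -s extglob   # for the +([0-9]) pattern used below

# Sketch of get_meminfo as inferred from the common.sh@17-33 trace above.
get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem
    # With a node argument, read the node-local file instead (common.sh@23-24).
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node files prefix each line with "Node <id> "; strip it (common.sh@29).
    mem=("${mem[@]#Node +([0-9]) }")
    # Every miss in this loop is one IFS / read / [[ ... ]] / continue
    # quartet in the trace above, which is why a single call emits dozens
    # of records before the key finally matches.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}

get_meminfo HugePages_Total    # -> 1536 on this host
get_meminfo HugePages_Surp 0   # -> 0 for node0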
00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:48.650 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.651 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:48.651 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:48.651 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.651 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.651 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:48.651 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:48.651 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54272376 kB' 'MemUsed: 11386632 kB' 'SwapCached: 0 kB' 'Active: 4453328 kB' 'Inactive: 3323076 kB' 'Active(anon): 4311052 kB' 'Inactive(anon): 0 kB' 'Active(file): 142276 kB' 'Inactive(file): 3323076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7546904 kB' 'Mapped: 62980 kB' 'AnonPages: 232848 kB' 'Shmem: 4081552 kB' 'KernelStack: 12664 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148740 kB' 'Slab: 650012 kB' 'SReclaimable: 148740 kB' 'SUnreclaim: 501272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@31-32: the read / [[ ... ]] / continue scan repeats for each node0 key, MemTotal through HugePages_Free; none matches HugePages_Surp ...]
00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
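With node0's surplus read back as 0, the same get_meminfo call repeats for node1 just below, after which the distinct per-node counts are recorded (hugepages.sh@126-127 at the end of this pass). Condensed, the bookkeeping traced at hugepages.sh@115-117 and @126-127 looks roughly like the following sketch; the initial contents of nodes_test (the 512/1024 split this custom_alloc pass appears to request) are inferred from the trace, not taken from the SPDK source:

# Per-node accounting as inferred from the hugepages.sh xtrace; reuses the
# get_meminfo sketch above. Initial array contents are assumptions.
nodes_test=([0]=512 [1]=1024)   # pages the test expects per node (inferred)
nodes_sys=([0]=512 [1]=1024)    # actual totals filled in by get_nodes' node+([0-9]) glob
sorted_t=() sorted_s=()
resv=0

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))               # @116: resv is 0 in this pass
    surp=$(get_meminfo HugePages_Surp "$node")   # @117: returns 0 on both nodes
    (( nodes_test[node] += surp ))
done

# @126-127: use array indices as a set of distinct counts; matching index
# sets for sorted_t and sorted_s mean the kernel placed the pages the way
# the test asked.
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1
    sorted_s[nodes_sys[node]]=1
done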
00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679828 kB' 'MemFree: 50159720 kB' 'MemUsed: 10520108 kB' 'SwapCached: 0 kB' 'Active: 6937784 kB' 'Inactive: 193528 kB' 'Active(anon): 6664532 kB' 'Inactive(anon): 0 kB' 'Active(file): 273252 kB' 'Inactive(file): 193528 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6806780 kB' 'Mapped: 129012 kB' 'AnonPages: 324764 kB' 'Shmem: 6340000 kB' 'KernelStack: 14664 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157952 kB' 'Slab: 487072 kB' 'SReclaimable: 157952 kB' 'SUnreclaim: 329120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.652 09:18:20 
00:03:48.652 09:18:20 setup.sh.hugepages.custom_alloc -- [xtrace collapsed: setup/common.sh@31-32 stepped through the remaining /proc/meminfo keys (Active, Inactive, Active/Inactive(anon), Active/Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free), skipping each with 'continue' until HugePages_Surp matched]
00:03:48.653 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:48.653 09:18:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:48.653 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:48.653 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:48.653 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:48.653 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
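Every scan collapsed above follows the same setup/common.sh pattern: split each /proc/meminfo line on ': ', skip non-matching keys with 'continue', and echo the value of the requested one. A minimal standalone sketch of that technique (a reconstruction for illustration, not the verbatim SPDK helper; it omits the per-node handling traced later):

    # get_meminfo <key>: print the value column of one /proc/meminfo field,
    # e.g. `get_meminfo HugePages_Surp` prints 0 on this machine.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other key
            echo "$val"                        # value only; the kB unit lands in $_
            return 0
        done < /proc/meminfo
        return 1                               # key not present
    }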
00:03:48.653 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:48.653 node0=512 expecting 512
00:03:48.653 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:48.653 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:48.653 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:48.653 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:48.653 node1=1024 expecting 1024
00:03:48.653 09:18:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:48.653
00:03:48.653 real    0m3.533s
00:03:48.653 user    0m1.371s
00:03:48.653 sys     0m2.199s
00:03:48.653 09:18:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:48.653 09:18:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:48.653 ************************************
00:03:48.653 END TEST custom_alloc
00:03:48.653 ************************************
00:03:48.653 09:18:20 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:48.653 09:18:20 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:48.653 09:18:20 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:48.653 09:18:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:48.653 ************************************
00:03:48.653 START TEST no_shrink_alloc
00:03:48.653 ************************************
00:03:48.653 09:18:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc
00:03:48.653 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:48.653 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:48.653 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:48.653 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:48.653 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:48.654 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:48.654 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:48.654 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:48.654 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:48.654 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:48.654 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:48.654 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:48.654 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:48.654 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:48.654 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:48.654 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:48.654 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:48.654 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:48.654 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:48.654 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:48.654 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:48.654 09:18:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:51.958 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:51.958 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:51.958 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:51.958 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:51.958 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:51.958 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:51.958 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:51.958 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:51.958 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:51.958 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:51.958 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:51.958 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:51.958 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:51.958 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:51.958 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:51.958 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:51.958 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
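The get_test_nr_hugepages_per_node trace above is simple array bookkeeping: when the caller names NUMA nodes (here node 0), each named node is assigned the full per-test page count, and nodes_test later drives the per-node 'expecting' checks. A sketch of that step under those assumptions (reconstructed from the xtrace, not the verbatim setup/hugepages.sh):

    nr_hugepages=1024
    user_nodes=(0)                  # node ids passed in by the test
    nodes_test=()                   # node id -> expected hugepage count
    if (( ${#user_nodes[@]} > 0 )); then
        for node in "${user_nodes[@]}"; do
            nodes_test[node]=$nr_hugepages   # each named node gets the full count
        done
    fi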
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105467332 kB' 'MemAvailable: 108728480 kB' 'Buffers: 2704 kB' 'Cached: 14351072 kB' 'SwapCached: 0 kB' 'Active: 11390280 kB' 'Inactive: 3516604 kB' 'Active(anon): 10974752 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556024 kB' 'Mapped: 192124 kB' 'Shmem: 10421644 kB' 'KReclaimable: 306692 kB' 'Slab: 1137048 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 830356 kB' 'KernelStack: 27392 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12458996 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235072 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
00:03:51.958 09:18:23 setup.sh.hugepages.no_shrink_alloc -- [xtrace collapsed: setup/common.sh@31-32 stepped through the snapshot keys (MemTotal ... HardwareCorrupted), skipping each with 'continue' until AnonHugePages matched]
00:03:51.960 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:51.960 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:51.960 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:51.960 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:51.960 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:51.960 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:51.960 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:51.960 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.960 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.960 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.960 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.960 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.960 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.960 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.960 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.960 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105467864 kB' 'MemAvailable: 108729012 kB' 'Buffers: 2704 kB' 'Cached: 14351072 kB' 'SwapCached: 0 kB' 'Active: 11390396 kB' 'Inactive: 3516604 kB' 'Active(anon): 10974868 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556148 kB' 'Mapped: 192100 kB' 'Shmem: 10421644 kB' 'KReclaimable: 306692 kB' 'Slab: 1137032 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 830340 kB' 'KernelStack: 27344 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12459012 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235056 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
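The '[[ -e /sys/devices/system/node/node/meminfo ]]' and mem=("${mem[@]#Node +([0-9]) }") steps in the prologue handle the per-node variant of the same reader: /sys/devices/system/node/node<N>/meminfo prefixes every line with 'Node <N> ', and the extglob substitution strips that prefix so the rest of the parser stays identical. A small demonstration of just the stripping step (the array values here are illustrative, not from this run):

    shopt -s extglob                     # needed for the +([0-9]) pattern
    mem=('Node 0 MemTotal: 63169418 kB' 'Node 0 HugePages_Total: 512')
    mem=("${mem[@]#Node +([0-9]) }")     # drop the leading 'Node N '
    printf '%s\n' "${mem[@]}"
    # MemTotal: 63169418 kB
    # HugePages_Total: 512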
00:03:51.960 09:18:23 setup.sh.hugepages.no_shrink_alloc -- [xtrace collapsed: setup/common.sh@31-32 stepped through the snapshot keys (MemTotal ... HugePages_Rsvd), skipping each with 'continue' until HugePages_Surp matched]
00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105468256 kB' 'MemAvailable: 108729404 kB' 'Buffers: 2704 kB' 'Cached: 14351092 kB' 'SwapCached: 0 kB' 'Active: 11389984 kB' 'Inactive: 3516604 kB' 'Active(anon): 10974456 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556128 kB' 'Mapped: 192024 kB' 'Shmem: 10421664 kB' 'KReclaimable: 306692 kB' 'Slab: 1137016 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 830324 kB' 'KernelStack: 27344 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12459036 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235056 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
kB' 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.962 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 
09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.963 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.964 09:18:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:51.964 nr_hugepages=1024 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:51.964 resv_hugepages=0 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:51.964 surplus_hugepages=0 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:51.964 anon_hugepages=0 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105468592 kB' 'MemAvailable: 108729740 kB' 'Buffers: 2704 kB' 'Cached: 14351132 kB' 'SwapCached: 0 kB' 'Active: 11389656 kB' 'Inactive: 3516604 kB' 'Active(anon): 10974128 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555728 kB' 'Mapped: 192024 kB' 'Shmem: 10421704 kB' 'KReclaimable: 306692 kB' 'Slab: 1137016 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 830324 kB' 'KernelStack: 27328 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12459056 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235056 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.964 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.965 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.965 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.965 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.965 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.965 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.965 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:51.965 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.965 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.965 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 
09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.228 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53215684 kB' 'MemUsed: 12443324 kB' 'SwapCached: 0 kB' 'Active: 4453364 kB' 'Inactive: 3323076 kB' 'Active(anon): 4311088 kB' 'Inactive(anon): 0 kB' 'Active(file): 142276 kB' 'Inactive(file): 3323076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7546940 kB' 'Mapped: 62980 kB' 'AnonPages: 232576 kB' 'Shmem: 4081588 kB' 'KernelStack: 12632 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148740 kB' 'Slab: 649960 kB' 'SReclaimable: 148740 kB' 'SUnreclaim: 501220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.229 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.230 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.230 09:18:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repeated xtrace condensed: PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free each fail the HugePages_Surp comparison and hit continue]
00:03:52.231 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.231 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.231 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:52.231 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:52.231 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:52.231 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:52.231 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:52.231 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:52.231 node0=1024 expecting 1024
00:03:52.231 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:52.231 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:52.231 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:52.231 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:52.231 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:52.231 09:18:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:55.536 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:55.536 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:55.536 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:55.536 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:55.536 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:55.536 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:55.536 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:55.536 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:55.536 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:55.536 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:55.536 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:55.536 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:55.536 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:55.536 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:55.536 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:55.536 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:55.536 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:55.536 INFO: Requested 512 hugepages but 1024 already allocated on node0
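For readers following the xtrace above: the loop being stepped through is SPDK's get_meminfo helper in test/setup/common.sh, which scans a meminfo file for one key and prints its value. A minimal sketch reconstructed from the common.sh@16-33 entries in this trace (the printf/read plumbing is simplified here, so treat it as an illustration rather than the tree's exact code):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {    # usage: get_meminfo <Key> [node], e.g. get_meminfo HugePages_Surp
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, read that node's own meminfo instead (common.sh@23-25).
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node files (common.sh@29)
        # Scan "Key: value ..." lines until the requested key matches (common.sh@31-33).
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 on this box, as in the trace above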
00:03:55.536 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:55.536 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:55.536 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:55.536 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:55.536 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:55.536 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:55.536 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:55.536 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:55.536 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:55.536 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:55.536 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:55.536 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:55.536 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.536 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.536 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.536 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.536 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.536 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.536 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.537 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.537 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105462256 kB' 'MemAvailable: 108723404 kB' 'Buffers: 2704 kB' 'Cached: 14351228 kB' 'SwapCached: 0 kB' 'Active: 11391384 kB' 'Inactive: 3516604 kB' 'Active(anon): 10975856 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556884 kB' 'Mapped: 192140 kB' 'Shmem: 10421800 kB' 'KReclaimable: 306692 kB' 'Slab: 1137332 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 830640 kB' 'KernelStack: 27376 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12459996 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235056 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
00:03:55.537 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repeated xtrace condensed: every key from MemTotal through HardwareCorrupted fails the AnonHugePages comparison and hits continue]
00:03:55.538 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:55.538 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.538 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:55.538 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
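One quick consistency check when reading a snapshot like the one above: the Hugetlb field must equal HugePages_Total times Hugepagesize. A one-liner over /proc/meminfo (plain awk, nothing SPDK-specific):

    awk '/^HugePages_Total/ {n = $2}
         /^Hugepagesize/    {sz = $2}
         END {printf "Hugetlb = %d pages * %d kB = %d kB\n", n, sz, n * sz}' /proc/meminfo

Here that is 1024 * 2048 kB = 2097152 kB, matching the snapshot, and HugePages_Free is still 1024, so none of the preallocated pages are in use while the counters are read.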
00:03:55.538 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:55.538 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # [repeated xtrace condensed: same get_meminfo preamble as above, now with get=HugePages_Surp]
00:03:55.538 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105463840 kB' 'MemAvailable: 108724988 kB' 'Buffers: 2704 kB' 'Cached: 14351232 kB' 'SwapCached: 0 kB' 'Active: 11390908 kB' 'Inactive: 3516604 kB' 'Active(anon): 10975380 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556844 kB' 'Mapped: 192040 kB' 'Shmem: 10421804 kB' 'KReclaimable: 306692 kB' 'Slab: 1137316 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 830624 kB' 'KernelStack: 27344 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12460012 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235040 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB'
00:03:55.538 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repeated xtrace condensed: every key from MemTotal through HugePages_Rsvd fails the HugePages_Surp comparison and hits continue]
00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
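The counters the helper pulls out of /proc/meminfo are also exposed per page size (and per node) under sysfs, which is handy when re-running this verification by hand; the paths below assume the 2048 kB page size used in this run:

    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages        # 1024 here
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages      # 1024
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/surplus_hugepages   # 0, matches surp=0 above
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/resv_hugepages      # 0, the value fetched next
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages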
-- setup/common.sh@31 -- # read -r var val _ 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105464808 kB' 'MemAvailable: 108725956 kB' 'Buffers: 2704 kB' 'Cached: 14351252 kB' 'SwapCached: 0 kB' 'Active: 11390936 kB' 'Inactive: 3516604 kB' 'Active(anon): 10975408 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556844 kB' 'Mapped: 192040 kB' 'Shmem: 10421824 kB' 'KReclaimable: 306692 kB' 'Slab: 1137316 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 830624 kB' 'KernelStack: 27344 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12460036 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235040 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB' 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.540 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.540 09:18:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 [... xtrace condensed: the @31/@32 loop reads each remaining /proc/meminfo key (SwapCached through CmaFree, all already listed in the printf above), compares it against HugePages_Rsvd, and skips it with 'continue' ...] 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:55.542 nr_hugepages=1024 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.542 resv_hugepages=0 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.542 surplus_hugepages=0 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.542 anon_hugepages=0 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.542 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338836 kB' 'MemFree: 105465004 kB' 'MemAvailable: 108726152 kB' 'Buffers: 2704 kB' 'Cached: 14351272 kB' 'SwapCached: 0 kB' 'Active: 11391076 kB' 'Inactive: 3516604 kB' 'Active(anon): 10975548 kB' 'Inactive(anon): 0 kB' 'Active(file): 415528 kB' 'Inactive(file): 3516604 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557008 kB' 'Mapped: 192040 kB' 'Shmem: 10421844 kB' 'KReclaimable: 306692 kB' 'Slab: 1137316 kB' 'SReclaimable: 306692 kB' 'SUnreclaim: 830624 kB' 'KernelStack: 27328 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 12461296 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235024 kB' 'VmallocChunk: 0 kB' 'Percpu: 108864 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4425076 kB' 'DirectMap2M: 29857792 kB' 'DirectMap1G: 101711872 kB' 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.543 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.543 09:18:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [... xtrace condensed: the same loop walks the keys again (SwapCached through CmaFree), this time comparing each against HugePages_Total and skipping with 'continue' ...] 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31
-- # IFS=': ' 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.544 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
65659008 kB' 'MemFree: 53231980 kB' 'MemUsed: 12427028 kB' 'SwapCached: 0 kB' 'Active: 4455092 kB' 'Inactive: 3323076 kB' 'Active(anon): 4312816 kB' 'Inactive(anon): 0 kB' 'Active(file): 142276 kB' 'Inactive(file): 3323076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7546992 kB' 'Mapped: 62980 kB' 'AnonPages: 234340 kB' 'Shmem: 4081640 kB' 'KernelStack: 12632 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 148740 kB' 'Slab: 650304 kB' 'SReclaimable: 148740 kB' 'SUnreclaim: 501564 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.545 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.545 
09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' [... xtrace condensed: the node-0 pass likewise skips each remaining per-node meminfo key (Inactive(anon) through FileHugePages) while searching for HugePages_Surp ...] 00:03:55.548 09:18:27
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.548 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.811 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.811 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.811 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.811 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:55.811 node0=1024 expecting 1024 00:03:55.811 09:18:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:55.811 00:03:55.811 real 0m6.939s 00:03:55.811 user 0m2.716s 00:03:55.811 sys 0m4.337s 00:03:55.811 09:18:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:55.811 09:18:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:55.811 ************************************ 00:03:55.811 END TEST no_shrink_alloc 00:03:55.811 ************************************ 00:03:55.811 09:18:27 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:55.811 09:18:27 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:55.811 09:18:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
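The three lookups traced above (HugePages_Rsvd, HugePages_Total, then HugePages_Surp against node 0) all follow the same setup/common.sh pattern: choose /proc/meminfo or the per-node sysfs copy, strip any "Node N " prefix, then scan key/value pairs with IFS=': '. A minimal sketch of that pattern, reconstructed from the xtrace rather than copied from the script (the name get_meminfo_sketch and the sed-based prefix strip are simplifications of the mapfile-based original):

get_meminfo_sketch() {
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo
  # Per-node lookups read the sysfs copy instead, as common.sh@23-24 shows.
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
  local var val _
  while IFS=': ' read -r var val _; do
    # First matching key wins; the value is the second field.
    [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
  done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")   # drop the "Node N " prefix
  echo 0   # key absent: report 0, as the trace does for HugePages_Rsvd
}

Called as get_meminfo_sketch HugePages_Total or get_meminfo_sketch HugePages_Surp 0, this reproduces the two call shapes in the trace; the returned values feed the hugepages.sh@107/@110 assertion (( 1024 == nr_hugepages + surp + resv )), i.e. all 1024 configured pages must still be accounted for once surplus and reserved pages are added back.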
00:03:55.811 09:18:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.811 09:18:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:55.811 09:18:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.811 09:18:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:55.811 09:18:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:55.811 09:18:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.811 09:18:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:55.811 09:18:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.811 09:18:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:55.811 09:18:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:55.812 09:18:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:55.812 00:03:55.812 real 0m25.428s 00:03:55.812 user 0m9.924s 00:03:55.812 sys 0m15.868s 00:03:55.812 09:18:27 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:55.812 09:18:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.812 ************************************ 00:03:55.812 END TEST hugepages 00:03:55.812 ************************************ 00:03:55.812 09:18:27 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:55.812 09:18:27 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:55.812 09:18:27 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:55.812 09:18:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:55.812 ************************************ 00:03:55.812 START TEST driver 00:03:55.812 ************************************ 00:03:55.812 09:18:27 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:55.812 * Looking for test storage... 
00:03:55.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:55.812 09:18:27 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:55.812 09:18:27 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.812 09:18:27 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:01.136 09:18:32 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:01.136 09:18:32 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:01.136 09:18:32 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:01.136 09:18:32 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:01.136 ************************************ 00:04:01.136 START TEST guess_driver 00:04:01.136 ************************************ 00:04:01.136 09:18:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:04:01.136 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:01.136 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:01.136 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:01.137 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:01.137 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:01.137 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:01.137 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:01.137 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:01.137 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:01.137 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:01.137 09:18:32 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' Looking for driver=vfio-pci 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.137 09:18:32 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:03.680 09:18:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.680 09:18:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.680 09:18:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... the same @58/@61/@57 marker-and-driver check repeats identically for each remaining device line printed by setup.sh config, every one bound to vfio-pci ...]
00:04:03.942 09:18:35 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:03.942 09:18:35 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:03.942 09:18:35 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:03.942 09:18:35 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:09.231 00:04:09.231 real 0m8.361s 00:04:09.231 user 0m2.700s 00:04:09.231 sys 0m4.803s 00:04:09.231 09:18:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:09.231 09:18:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:09.231 ************************************ 00:04:09.231 END TEST guess_driver 00:04:09.231 ************************************ 00:04:09.231 00:04:09.231 real 0m13.103s 00:04:09.231 user 0m4.037s 00:04:09.231 sys 0m7.331s 00:04:09.231 09:18:40 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:09.231
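guess_driver settles on vfio-pci above by confirming that IOMMU groups exist (314 on this host) and that modprobe can resolve the module to real .ko files. A condensed sketch of that decision, under the assumption that it mirrors the setup/driver.sh logic traced above rather than quoting it:

    # vfio needs a working IOMMU: either populated iommu_groups or the
    # explicitly enabled unsafe no-IOMMU mode.
    iommu_groups=(/sys/kernel/iommu_groups/*)
    unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null)
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        # --show-depends prints the insmod list without loading anything; a .ko
        # match proves the module and its dependencies exist for this kernel.
        if modprobe --show-depends vfio_pci | grep -q '\.ko'; then
            echo vfio-pci
        fi
    fi
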
09:18:40 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:09.231 ************************************ 00:04:09.231 END TEST driver 00:04:09.231 ************************************ 00:04:09.231 09:18:40 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:09.231 09:18:40 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:09.231 09:18:40 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:09.232 09:18:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:09.232 ************************************ 00:04:09.232 START TEST devices 00:04:09.232 ************************************ 00:04:09.232 09:18:40 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:09.232 * Looking for test storage... 00:04:09.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:09.232 09:18:40 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:09.232 09:18:40 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:09.232 09:18:40 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.232 09:18:40 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.439 09:18:44 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:13.439 09:18:44 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:13.439 09:18:44 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:13.439 09:18:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:13.439 09:18:44 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:13.439 09:18:44 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:13.439 09:18:44 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:13.439 09:18:44 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:13.439 09:18:44 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:13.439 09:18:44 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:13.439 09:18:44 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:13.439 09:18:44 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:13.439 09:18:44 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:13.439 09:18:44 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:13.439 09:18:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:13.439 09:18:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:13.439 09:18:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:13.439 09:18:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:13.439 09:18:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:13.439 09:18:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:13.439 09:18:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:13.439 09:18:44 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:13.439 No valid GPT data, 
bailing 00:04:13.439 09:18:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:13.439 09:18:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:13.439 09:18:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:13.439 09:18:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:13.439 09:18:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:13.439 09:18:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:13.439 09:18:44 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:13.439 09:18:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:13.439 09:18:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:13.439 09:18:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:13.439 09:18:44 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:13.439 09:18:44 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:13.439 09:18:44 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:13.439 09:18:44 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:13.439 09:18:44 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:13.439 09:18:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:13.439 ************************************ 00:04:13.439 START TEST nvme_mount 00:04:13.439 ************************************ 00:04:13.439 09:18:44 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:04:13.439 09:18:44 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:13.439 09:18:44 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:13.439 09:18:44 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.439 09:18:44 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:13.439 09:18:44 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:13.439 09:18:44 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:13.439 09:18:44 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:13.439 09:18:44 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:13.439 09:18:44 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:13.439 09:18:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:13.439 09:18:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:13.439 09:18:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:13.439 09:18:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.439 09:18:44 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:13.439 09:18:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:13.439 09:18:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.439 09:18:44 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:13.439 09:18:44 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:13.439 09:18:44 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:14.011 Creating new GPT entries in memory. 00:04:14.011 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:14.011 other utilities. 00:04:14.011 09:18:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:14.011 09:18:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.011 09:18:45 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:14.011 09:18:45 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:14.011 09:18:45 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:14.957 Creating new GPT entries in memory. 00:04:14.957 The operation has completed successfully. 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 869951 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
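The nvme_mount flow traced above zaps the disk, creates one 1 GiB partition under an flock on the block device, formats it, and mounts it. The same sequence as a standalone sketch (device and mount point as in this log; destructive, so illustration only):

    disk=/dev/nvme0n1
    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                           # destroy old GPT and MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # 1 GiB partition; flock serializes sgdisk runs on the disk
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"                          # -q quiet, -F don't prompt on a whole device
    mount "${disk}p1" "$mnt"
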
00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.957 09:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:17.504 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.504 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the same @62/@60 compare-and-read xtrace repeats for the remaining non-allowed addresses 0000:80:01.0-0000:80:01.7 ...]
00:04:17.766 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.766 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:17.766 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:17.766 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the same compare-and-read repeats for 0000:00:01.0-0000:00:01.7 ...]
09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:17.766 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:17.766 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.766 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:17.766 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.766 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:17.766 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.766 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.766 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:17.766 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:17.766 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:17.766 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:17.766 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:18.026 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:18.026 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:18.026 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:18.026 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:18.026 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:18.026 09:18:49 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:18.026 09:18:49
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.026 09:18:49 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:18.026 09:18:49 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:18.026 09:18:49 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.026 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.026 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:18.026 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:18.026 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.026 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.026 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:18.026 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.026 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:18.026 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:18.026 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.026 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:18.026 09:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:18.026 09:18:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.026 09:18:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:21.331 09:18:52
[... the @62/@60 compare-and-read xtrace repeats for 0000:80:01.0-0000:80:01.7, none matching 0000:65:00.0 ...]
09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the same compare-and-read repeats for 0000:00:01.0-0000:00:01.7 ...]
09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 09:18:52 setup.sh.devices.nvme_mount
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 09:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 09:18:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 09:18:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:24.710 09:18:56
[... the @62/@60 compare-and-read xtrace repeats for 0000:80:01.0-0000:80:01.7, none matching 0000:65:00.0 ...]
00:04:24.710 09:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.710 09:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:24.710 09:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:24.710 09:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the same compare-and-read repeats for 0000:00:01.0-0000:00:01.7 ...]
00:04:24.710 09:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.710 09:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:24.710 09:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:24.710 09:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:24.710 09:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.710 09:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.710 09:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.710 09:18:56 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:24.710 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:24.710 00:04:24.710 real 0m11.790s 00:04:24.710 user 0m3.281s 00:04:24.710 sys 0m6.222s 00:04:24.710 09:18:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:24.710 09:18:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:24.710 ************************************ 00:04:24.710 END TEST nvme_mount 00:04:24.710 ************************************
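Each verify pass in the trace reads the setup.sh config listing one PCI address at a time, skips every address except the allowed 0000:65:00.0, and checks its status text for the expected active device. A condensed sketch of that loop, assuming (as the read in the trace implies) that the first field of each listing line is the PCI address:

    PCI_ALLOWED=0000:65:00.0
    found=0
    while read -r pci _ _ status; do
        [[ $pci == "$PCI_ALLOWED" ]] || continue   # every other BDF is read and skipped
        # status example from the trace:
        #   "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"
        [[ $status == *"Active devices: "*"nvme0n1:nvme0n1p1"* ]] && found=1
    done < <(scripts/setup.sh config)              # run from the spdk checkout
    (( found == 1 )) && echo 'nvme0n1p1 is busy, as the test expects'
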
09:18:56 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:24.710 09:18:56 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:24.710 09:18:56 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:24.710 09:18:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:24.710 ************************************ 00:04:24.710 START TEST dm_mount 00:04:24.710 ************************************ 00:04:24.710 09:18:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:04:24.710 09:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:24.710 09:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:24.710 09:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:24.710 09:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:24.710 09:18:56 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:24.710 09:18:56 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:24.710 09:18:56 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:24.710 09:18:56 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:24.710 09:18:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:24.710 09:18:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:24.710 09:18:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:24.711 09:18:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.711 09:18:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.711 09:18:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:24.711 09:18:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.711 09:18:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.711 09:18:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:24.711 09:18:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.711 09:18:56 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:24.711 09:18:56 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:24.711 09:18:56 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:25.654 Creating new GPT entries in memory. 00:04:25.654 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:25.654 other utilities. 00:04:25.654 09:18:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:25.654 09:18:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.654 09:18:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:25.654 09:18:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.655 09:18:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:27.042 Creating new GPT entries in memory. 00:04:27.042 The operation has completed successfully. 
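dm_mount carves two 1 GiB partitions, and the start/end sectors in its sgdisk calls come straight from the part_start/part_end arithmetic traced above. Reproduced as a self-contained sketch:

    size=$(( 1073741824 / 512 ))   # 1 GiB in 512-byte sectors -> 2097152
    part_start=0 part_end=0
    for part in 1 2; do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        echo "sgdisk /dev/nvme0n1 --new=$part:$part_start:$part_end"
    done
    # prints --new=1:2048:2099199 and --new=2:2099200:4196351, matching the log
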
00:04:27.042 09:18:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:27.042 09:18:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.042 09:18:58 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:27.042 09:18:58 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:27.042 09:18:58 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:27.986 The operation has completed successfully. 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 875239 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.986 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:27.987 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:27.987 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:27.987 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.987 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:27.987 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.987 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:27.987 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:27.987 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:27.987 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.987 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:27.987 09:18:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.987 09:18:59 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.987 09:18:59 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:31.287 09:19:02
[... the @62/@60 compare-and-read xtrace repeats for 0000:80:01.0-0000:80:01.7, none matching 0000:65:00.0, then 0000:65:00.0 matches ...]
09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the same compare-and-read repeats for 0000:00:01.0-0000:00:01.7 ...]
09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:31.288
09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:31.288 09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:31.288 09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:31.288 09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:31.288 09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.288 09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:31.288 09:19:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:31.288 09:19:02 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.288 09:19:02 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:34.593 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:34.855 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:34.855 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:34.855 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:34.855 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:34.855 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:34.855 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:34.855 09:19:06 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:34.855 00:04:34.855 real 0m10.076s 00:04:34.855 user 0m2.677s 00:04:34.855 sys 0m4.458s 00:04:34.855 09:19:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:34.855 09:19:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:34.855 ************************************ 00:04:34.855 END TEST dm_mount 00:04:34.855 ************************************ 00:04:34.855 09:19:06 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:34.855 09:19:06 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:34.855 09:19:06 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.855 09:19:06 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
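The dm_mount teardown above unmounts the test filesystem, removes the device-mapper node, and only then wipes the backing partitions; a minimal standalone sketch of that sequence (dm name and partitions taken from this run, mount point hypothetical) is:

# Illustrative sketch only: tear down a test dm device in the same order as above.
dm_name=nvme_dm_test              # name used in this run
mount_point=/mnt/dm_test          # hypothetical mount point
if mountpoint -q "$mount_point"; then
    umount "$mount_point"
fi
if [[ -L /dev/mapper/$dm_name ]]; then
    dmsetup remove --force "$dm_name"
fi
for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
    if [[ -b $part ]]; then
        wipefs --all "$part"
    fi
done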
00:04:34.855 09:19:06 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:34.855 09:19:06 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:34.855 09:19:06 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:35.116 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:35.116 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:35.116 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:35.116 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:35.116 09:19:06 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:35.116 09:19:06 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:35.116 09:19:06 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:35.116 09:19:06 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:35.116 09:19:06 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:35.116 09:19:06 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:35.116 09:19:06 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:35.116 00:04:35.116 real 0m26.130s 00:04:35.116 user 0m7.484s 00:04:35.116 sys 0m13.289s 00:04:35.116 09:19:06 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:35.116 09:19:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:35.116 ************************************ 00:04:35.116 END TEST devices 00:04:35.116 ************************************ 00:04:35.116 00:04:35.116 real 1m29.967s 00:04:35.116 user 0m29.977s 00:04:35.116 sys 0m51.059s 00:04:35.116 09:19:06 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:35.116 09:19:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:35.116 ************************************ 00:04:35.116 END TEST setup.sh 00:04:35.116 ************************************ 00:04:35.116 09:19:06 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:38.419 Hugepages 00:04:38.419 node hugesize free / total 00:04:38.419 node0 1048576kB 0 / 0 00:04:38.419 node0 2048kB 2048 / 2048 00:04:38.419 node1 1048576kB 0 / 0 00:04:38.419 node1 2048kB 0 / 0 00:04:38.419 00:04:38.419 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:38.680 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:38.680 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:38.680 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:38.680 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:38.680 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:38.680 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:38.680 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:38.680 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:38.680 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:38.680 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:38.680 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:38.680 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:38.680 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:38.680 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:38.680 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:38.680 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:38.680 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:38.680 09:19:10 -- spdk/autotest.sh@130 -- # uname -s 
00:04:38.680 09:19:10 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:38.680 09:19:10 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:38.680 09:19:10 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:41.981 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:41.981 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:41.981 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:41.981 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:41.981 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:41.981 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:41.981 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:41.981 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:41.981 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:41.981 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:41.981 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:41.981 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:41.981 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:41.981 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:41.981 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:41.981 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:43.893 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:43.893 09:19:15 -- common/autotest_common.sh@1531 -- # sleep 1 00:04:44.833 09:19:16 -- common/autotest_common.sh@1532 -- # bdfs=() 00:04:44.833 09:19:16 -- common/autotest_common.sh@1532 -- # local bdfs 00:04:44.833 09:19:16 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:04:44.833 09:19:16 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:04:44.833 09:19:16 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:44.833 09:19:16 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:44.834 09:19:16 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:44.834 09:19:16 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:44.834 09:19:16 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:44.834 09:19:16 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:04:44.834 09:19:16 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:04:44.834 09:19:16 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:48.176 Waiting for block devices as requested 00:04:48.176 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:48.176 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:48.176 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:48.176 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:48.176 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:48.176 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:48.176 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:48.437 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:48.437 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:48.437 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:48.698 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:48.698 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:48.698 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:48.959 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:48.959 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:48.959 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:49.220 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:49.220 09:19:20 -- common/autotest_common.sh@1537 -- # 
for bdf in "${bdfs[@]}" 00:04:49.220 09:19:20 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:49.220 09:19:20 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:04:49.220 09:19:20 -- common/autotest_common.sh@1501 -- # grep 0000:65:00.0/nvme/nvme 00:04:49.220 09:19:20 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:49.220 09:19:20 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:49.220 09:19:20 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:49.220 09:19:20 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:04:49.220 09:19:20 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:04:49.220 09:19:20 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:04:49.220 09:19:20 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:04:49.220 09:19:20 -- common/autotest_common.sh@1544 -- # grep oacs 00:04:49.220 09:19:20 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:04:49.220 09:19:20 -- common/autotest_common.sh@1544 -- # oacs=' 0x5f' 00:04:49.220 09:19:20 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:04:49.220 09:19:20 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:04:49.220 09:19:20 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:04:49.220 09:19:20 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:04:49.220 09:19:20 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:04:49.220 09:19:20 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:04:49.220 09:19:20 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:04:49.220 09:19:20 -- common/autotest_common.sh@1556 -- # continue 00:04:49.220 09:19:20 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:49.220 09:19:20 -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:49.220 09:19:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.220 09:19:20 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:49.220 09:19:20 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:49.220 09:19:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.220 09:19:20 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:52.524 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:52.524 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:52.524 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:52.524 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:52.524 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:52.524 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:52.524 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:52.524 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:52.524 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:52.524 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:52.524 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:52.524 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:52.524 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:52.524 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:52.524 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:52.524 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:52.524 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:52.524 09:19:24 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:52.524 09:19:24 -- common/autotest_common.sh@729 -- # xtrace_disable 
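The pre-cleanup pass above decides whether a namespace revert is possible by reading the OACS word from nvme id-ctrl and testing bit 3 (namespace management); a hedged sketch of the same check is:

# Illustrative sketch only: does the controller advertise namespace management?
ctrl=/dev/nvme0                                   # controller seen in this run
oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)
if (( oacs & 0x8 )); then                         # bit 3 = namespace management
    echo "$ctrl supports namespace management (oacs =$oacs)"
else
    echo "$ctrl does not support namespace management (oacs =$oacs)"
fi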
00:04:52.524 09:19:24 -- common/autotest_common.sh@10 -- # set +x 00:04:52.786 09:19:24 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:52.786 09:19:24 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:04:52.786 09:19:24 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:04:52.786 09:19:24 -- common/autotest_common.sh@1576 -- # bdfs=() 00:04:52.786 09:19:24 -- common/autotest_common.sh@1576 -- # local bdfs 00:04:52.786 09:19:24 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:04:52.786 09:19:24 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:52.786 09:19:24 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:52.786 09:19:24 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:52.786 09:19:24 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:52.786 09:19:24 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:52.786 09:19:24 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:04:52.786 09:19:24 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:04:52.786 09:19:24 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:04:52.786 09:19:24 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:52.786 09:19:24 -- common/autotest_common.sh@1579 -- # device=0xa80a 00:04:52.786 09:19:24 -- common/autotest_common.sh@1580 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:52.786 09:19:24 -- common/autotest_common.sh@1585 -- # printf '%s\n' 00:04:52.786 09:19:24 -- common/autotest_common.sh@1591 -- # [[ -z '' ]] 00:04:52.786 09:19:24 -- common/autotest_common.sh@1592 -- # return 0 00:04:52.786 09:19:24 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:52.786 09:19:24 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:52.786 09:19:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:52.786 09:19:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:52.786 09:19:24 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:52.786 09:19:24 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:52.786 09:19:24 -- common/autotest_common.sh@10 -- # set +x 00:04:52.786 09:19:24 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:52.786 09:19:24 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:52.786 09:19:24 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:52.786 09:19:24 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:52.786 09:19:24 -- common/autotest_common.sh@10 -- # set +x 00:04:52.786 ************************************ 00:04:52.786 START TEST env 00:04:52.786 ************************************ 00:04:52.786 09:19:24 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:53.047 * Looking for test storage... 
00:04:53.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:53.047 09:19:24 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:53.047 09:19:24 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:53.047 09:19:24 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:53.047 09:19:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:53.047 ************************************ 00:04:53.047 START TEST env_memory 00:04:53.047 ************************************ 00:04:53.048 09:19:24 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:53.048 00:04:53.048 00:04:53.048 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.048 http://cunit.sourceforge.net/ 00:04:53.048 00:04:53.048 00:04:53.048 Suite: memory 00:04:53.048 Test: alloc and free memory map ...[2024-06-11 09:19:24.704635] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:53.048 passed 00:04:53.048 Test: mem map translation ...[2024-06-11 09:19:24.730285] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:53.048 [2024-06-11 09:19:24.730321] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:53.048 [2024-06-11 09:19:24.730370] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:53.048 [2024-06-11 09:19:24.730377] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:53.048 passed 00:04:53.048 Test: mem map registration ...[2024-06-11 09:19:24.785564] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:53.048 [2024-06-11 09:19:24.785589] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:53.048 passed 00:04:53.048 Test: mem map adjacent registrations ...passed 00:04:53.048 00:04:53.048 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.048 suites 1 1 n/a 0 0 00:04:53.048 tests 4 4 4 0 0 00:04:53.048 asserts 152 152 152 0 n/a 00:04:53.048 00:04:53.048 Elapsed time = 0.193 seconds 00:04:53.048 00:04:53.048 real 0m0.207s 00:04:53.048 user 0m0.195s 00:04:53.048 sys 0m0.012s 00:04:53.048 09:19:24 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:53.048 09:19:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:53.048 ************************************ 00:04:53.048 END TEST env_memory 00:04:53.048 ************************************ 00:04:53.310 09:19:24 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:53.310 09:19:24 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:53.310 09:19:24 env -- common/autotest_common.sh@1106 -- # xtrace_disable 
00:04:53.310 09:19:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:53.310 ************************************ 00:04:53.310 START TEST env_vtophys 00:04:53.310 ************************************ 00:04:53.310 09:19:24 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:53.310 EAL: lib.eal log level changed from notice to debug 00:04:53.310 EAL: Detected lcore 0 as core 0 on socket 0 00:04:53.310 EAL: Detected lcore 1 as core 1 on socket 0 00:04:53.310 EAL: Detected lcore 2 as core 2 on socket 0 00:04:53.310 EAL: Detected lcore 3 as core 3 on socket 0 00:04:53.310 EAL: Detected lcore 4 as core 4 on socket 0 00:04:53.310 EAL: Detected lcore 5 as core 5 on socket 0 00:04:53.310 EAL: Detected lcore 6 as core 6 on socket 0 00:04:53.310 EAL: Detected lcore 7 as core 7 on socket 0 00:04:53.310 EAL: Detected lcore 8 as core 8 on socket 0 00:04:53.310 EAL: Detected lcore 9 as core 9 on socket 0 00:04:53.310 EAL: Detected lcore 10 as core 10 on socket 0 00:04:53.310 EAL: Detected lcore 11 as core 11 on socket 0 00:04:53.310 EAL: Detected lcore 12 as core 12 on socket 0 00:04:53.310 EAL: Detected lcore 13 as core 13 on socket 0 00:04:53.310 EAL: Detected lcore 14 as core 14 on socket 0 00:04:53.310 EAL: Detected lcore 15 as core 15 on socket 0 00:04:53.310 EAL: Detected lcore 16 as core 16 on socket 0 00:04:53.310 EAL: Detected lcore 17 as core 17 on socket 0 00:04:53.310 EAL: Detected lcore 18 as core 18 on socket 0 00:04:53.310 EAL: Detected lcore 19 as core 19 on socket 0 00:04:53.310 EAL: Detected lcore 20 as core 20 on socket 0 00:04:53.310 EAL: Detected lcore 21 as core 21 on socket 0 00:04:53.310 EAL: Detected lcore 22 as core 22 on socket 0 00:04:53.310 EAL: Detected lcore 23 as core 23 on socket 0 00:04:53.310 EAL: Detected lcore 24 as core 24 on socket 0 00:04:53.310 EAL: Detected lcore 25 as core 25 on socket 0 00:04:53.310 EAL: Detected lcore 26 as core 26 on socket 0 00:04:53.310 EAL: Detected lcore 27 as core 27 on socket 0 00:04:53.310 EAL: Detected lcore 28 as core 28 on socket 0 00:04:53.310 EAL: Detected lcore 29 as core 29 on socket 0 00:04:53.310 EAL: Detected lcore 30 as core 30 on socket 0 00:04:53.310 EAL: Detected lcore 31 as core 31 on socket 0 00:04:53.310 EAL: Detected lcore 32 as core 32 on socket 0 00:04:53.310 EAL: Detected lcore 33 as core 33 on socket 0 00:04:53.310 EAL: Detected lcore 34 as core 34 on socket 0 00:04:53.310 EAL: Detected lcore 35 as core 35 on socket 0 00:04:53.310 EAL: Detected lcore 36 as core 0 on socket 1 00:04:53.310 EAL: Detected lcore 37 as core 1 on socket 1 00:04:53.310 EAL: Detected lcore 38 as core 2 on socket 1 00:04:53.310 EAL: Detected lcore 39 as core 3 on socket 1 00:04:53.310 EAL: Detected lcore 40 as core 4 on socket 1 00:04:53.310 EAL: Detected lcore 41 as core 5 on socket 1 00:04:53.310 EAL: Detected lcore 42 as core 6 on socket 1 00:04:53.310 EAL: Detected lcore 43 as core 7 on socket 1 00:04:53.310 EAL: Detected lcore 44 as core 8 on socket 1 00:04:53.310 EAL: Detected lcore 45 as core 9 on socket 1 00:04:53.310 EAL: Detected lcore 46 as core 10 on socket 1 00:04:53.310 EAL: Detected lcore 47 as core 11 on socket 1 00:04:53.310 EAL: Detected lcore 48 as core 12 on socket 1 00:04:53.310 EAL: Detected lcore 49 as core 13 on socket 1 00:04:53.310 EAL: Detected lcore 50 as core 14 on socket 1 00:04:53.310 EAL: Detected lcore 51 as core 15 on socket 1 00:04:53.310 EAL: Detected lcore 52 as core 16 on socket 1 00:04:53.310 EAL: Detected lcore 
53 as core 17 on socket 1 00:04:53.311 EAL: Detected lcore 54 as core 18 on socket 1 00:04:53.311 EAL: Detected lcore 55 as core 19 on socket 1 00:04:53.311 EAL: Detected lcore 56 as core 20 on socket 1 00:04:53.311 EAL: Detected lcore 57 as core 21 on socket 1 00:04:53.311 EAL: Detected lcore 58 as core 22 on socket 1 00:04:53.311 EAL: Detected lcore 59 as core 23 on socket 1 00:04:53.311 EAL: Detected lcore 60 as core 24 on socket 1 00:04:53.311 EAL: Detected lcore 61 as core 25 on socket 1 00:04:53.311 EAL: Detected lcore 62 as core 26 on socket 1 00:04:53.311 EAL: Detected lcore 63 as core 27 on socket 1 00:04:53.311 EAL: Detected lcore 64 as core 28 on socket 1 00:04:53.311 EAL: Detected lcore 65 as core 29 on socket 1 00:04:53.311 EAL: Detected lcore 66 as core 30 on socket 1 00:04:53.311 EAL: Detected lcore 67 as core 31 on socket 1 00:04:53.311 EAL: Detected lcore 68 as core 32 on socket 1 00:04:53.311 EAL: Detected lcore 69 as core 33 on socket 1 00:04:53.311 EAL: Detected lcore 70 as core 34 on socket 1 00:04:53.311 EAL: Detected lcore 71 as core 35 on socket 1 00:04:53.311 EAL: Detected lcore 72 as core 0 on socket 0 00:04:53.311 EAL: Detected lcore 73 as core 1 on socket 0 00:04:53.311 EAL: Detected lcore 74 as core 2 on socket 0 00:04:53.311 EAL: Detected lcore 75 as core 3 on socket 0 00:04:53.311 EAL: Detected lcore 76 as core 4 on socket 0 00:04:53.311 EAL: Detected lcore 77 as core 5 on socket 0 00:04:53.311 EAL: Detected lcore 78 as core 6 on socket 0 00:04:53.311 EAL: Detected lcore 79 as core 7 on socket 0 00:04:53.311 EAL: Detected lcore 80 as core 8 on socket 0 00:04:53.311 EAL: Detected lcore 81 as core 9 on socket 0 00:04:53.311 EAL: Detected lcore 82 as core 10 on socket 0 00:04:53.311 EAL: Detected lcore 83 as core 11 on socket 0 00:04:53.311 EAL: Detected lcore 84 as core 12 on socket 0 00:04:53.311 EAL: Detected lcore 85 as core 13 on socket 0 00:04:53.311 EAL: Detected lcore 86 as core 14 on socket 0 00:04:53.311 EAL: Detected lcore 87 as core 15 on socket 0 00:04:53.311 EAL: Detected lcore 88 as core 16 on socket 0 00:04:53.311 EAL: Detected lcore 89 as core 17 on socket 0 00:04:53.311 EAL: Detected lcore 90 as core 18 on socket 0 00:04:53.311 EAL: Detected lcore 91 as core 19 on socket 0 00:04:53.311 EAL: Detected lcore 92 as core 20 on socket 0 00:04:53.311 EAL: Detected lcore 93 as core 21 on socket 0 00:04:53.311 EAL: Detected lcore 94 as core 22 on socket 0 00:04:53.311 EAL: Detected lcore 95 as core 23 on socket 0 00:04:53.311 EAL: Detected lcore 96 as core 24 on socket 0 00:04:53.311 EAL: Detected lcore 97 as core 25 on socket 0 00:04:53.311 EAL: Detected lcore 98 as core 26 on socket 0 00:04:53.311 EAL: Detected lcore 99 as core 27 on socket 0 00:04:53.311 EAL: Detected lcore 100 as core 28 on socket 0 00:04:53.311 EAL: Detected lcore 101 as core 29 on socket 0 00:04:53.311 EAL: Detected lcore 102 as core 30 on socket 0 00:04:53.311 EAL: Detected lcore 103 as core 31 on socket 0 00:04:53.311 EAL: Detected lcore 104 as core 32 on socket 0 00:04:53.311 EAL: Detected lcore 105 as core 33 on socket 0 00:04:53.311 EAL: Detected lcore 106 as core 34 on socket 0 00:04:53.311 EAL: Detected lcore 107 as core 35 on socket 0 00:04:53.311 EAL: Detected lcore 108 as core 0 on socket 1 00:04:53.311 EAL: Detected lcore 109 as core 1 on socket 1 00:04:53.311 EAL: Detected lcore 110 as core 2 on socket 1 00:04:53.311 EAL: Detected lcore 111 as core 3 on socket 1 00:04:53.311 EAL: Detected lcore 112 as core 4 on socket 1 00:04:53.311 EAL: Detected lcore 113 as core 5 on 
socket 1 00:04:53.311 EAL: Detected lcore 114 as core 6 on socket 1 00:04:53.311 EAL: Detected lcore 115 as core 7 on socket 1 00:04:53.311 EAL: Detected lcore 116 as core 8 on socket 1 00:04:53.311 EAL: Detected lcore 117 as core 9 on socket 1 00:04:53.311 EAL: Detected lcore 118 as core 10 on socket 1 00:04:53.311 EAL: Detected lcore 119 as core 11 on socket 1 00:04:53.311 EAL: Detected lcore 120 as core 12 on socket 1 00:04:53.311 EAL: Detected lcore 121 as core 13 on socket 1 00:04:53.311 EAL: Detected lcore 122 as core 14 on socket 1 00:04:53.311 EAL: Detected lcore 123 as core 15 on socket 1 00:04:53.311 EAL: Detected lcore 124 as core 16 on socket 1 00:04:53.311 EAL: Detected lcore 125 as core 17 on socket 1 00:04:53.311 EAL: Detected lcore 126 as core 18 on socket 1 00:04:53.311 EAL: Detected lcore 127 as core 19 on socket 1 00:04:53.311 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:53.311 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:53.311 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:53.311 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:53.311 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:53.311 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:53.311 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:53.311 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:53.311 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:53.311 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:53.311 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:53.311 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:53.311 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:53.311 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:53.311 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:53.311 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:53.311 EAL: Maximum logical cores by configuration: 128 00:04:53.311 EAL: Detected CPU lcores: 128 00:04:53.311 EAL: Detected NUMA nodes: 2 00:04:53.311 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:53.311 EAL: Detected shared linkage of DPDK 00:04:53.311 EAL: No shared files mode enabled, IPC will be disabled 00:04:53.311 EAL: Bus pci wants IOVA as 'DC' 00:04:53.311 EAL: Buses did not request a specific IOVA mode. 00:04:53.311 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:53.311 EAL: Selected IOVA mode 'VA' 00:04:53.311 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.311 EAL: Probing VFIO support... 00:04:53.311 EAL: IOMMU type 1 (Type 1) is supported 00:04:53.311 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:53.311 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:53.311 EAL: VFIO support initialized 00:04:53.311 EAL: Ask a virtual area of 0x2e000 bytes 00:04:53.311 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:53.311 EAL: Setting up physically contiguous memory... 
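The memseg lists EAL reserves next are backed by the 2 MB hugepages set up earlier (2048 pages on node0, none on node1, per the status output above); outside SPDK, the per-node hugepage counts can be read straight from sysfs, as in this sketch:

# Illustrative sketch only: 2 MB hugepage totals and free counts per NUMA node.
for node in /sys/devices/system/node/node*; do
    hp=$node/hugepages/hugepages-2048kB
    [[ -d $hp ]] || continue
    printf '%s: %s free / %s total\n' \
        "${node##*/}" "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
done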
00:04:53.311 EAL: Setting maximum number of open files to 524288 00:04:53.311 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:53.311 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:53.311 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:53.311 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.311 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:53.311 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.311 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.311 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:53.311 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:53.311 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.311 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:53.311 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.311 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.311 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:53.311 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:53.311 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.311 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:53.311 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.311 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.311 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:53.311 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:53.311 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.311 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:53.311 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.311 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.311 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:53.311 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:53.311 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:53.311 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.311 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:53.311 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.311 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.311 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:53.311 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:53.311 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.311 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:53.311 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.311 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.311 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:53.311 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:53.311 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.311 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:53.311 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.311 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.311 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:53.311 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:53.311 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.311 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:53.311 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.311 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.311 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:53.311 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:53.311 EAL: Hugepages will be freed exactly as allocated. 00:04:53.311 EAL: No shared files mode enabled, IPC is disabled 00:04:53.311 EAL: No shared files mode enabled, IPC is disabled 00:04:53.311 EAL: TSC frequency is ~2400000 KHz 00:04:53.311 EAL: Main lcore 0 is ready (tid=7f0e00931a00;cpuset=[0]) 00:04:53.311 EAL: Trying to obtain current memory policy. 00:04:53.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.312 EAL: Restoring previous memory policy: 0 00:04:53.312 EAL: request: mp_malloc_sync 00:04:53.312 EAL: No shared files mode enabled, IPC is disabled 00:04:53.312 EAL: Heap on socket 0 was expanded by 2MB 00:04:53.312 EAL: No shared files mode enabled, IPC is disabled 00:04:53.312 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:53.312 EAL: Mem event callback 'spdk:(nil)' registered 00:04:53.312 00:04:53.312 00:04:53.312 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.312 http://cunit.sourceforge.net/ 00:04:53.312 00:04:53.312 00:04:53.312 Suite: components_suite 00:04:53.312 Test: vtophys_malloc_test ...passed 00:04:53.312 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:53.312 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.312 EAL: Restoring previous memory policy: 4 00:04:53.312 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.312 EAL: request: mp_malloc_sync 00:04:53.312 EAL: No shared files mode enabled, IPC is disabled 00:04:53.312 EAL: Heap on socket 0 was expanded by 4MB 00:04:53.312 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.312 EAL: request: mp_malloc_sync 00:04:53.312 EAL: No shared files mode enabled, IPC is disabled 00:04:53.312 EAL: Heap on socket 0 was shrunk by 4MB 00:04:53.312 EAL: Trying to obtain current memory policy. 00:04:53.312 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.312 EAL: Restoring previous memory policy: 4 00:04:53.312 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.312 EAL: request: mp_malloc_sync 00:04:53.312 EAL: No shared files mode enabled, IPC is disabled 00:04:53.312 EAL: Heap on socket 0 was expanded by 6MB 00:04:53.312 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.312 EAL: request: mp_malloc_sync 00:04:53.312 EAL: No shared files mode enabled, IPC is disabled 00:04:53.312 EAL: Heap on socket 0 was shrunk by 6MB 00:04:53.312 EAL: Trying to obtain current memory policy. 00:04:53.312 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.312 EAL: Restoring previous memory policy: 4 00:04:53.312 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.312 EAL: request: mp_malloc_sync 00:04:53.312 EAL: No shared files mode enabled, IPC is disabled 00:04:53.312 EAL: Heap on socket 0 was expanded by 10MB 00:04:53.312 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.312 EAL: request: mp_malloc_sync 00:04:53.312 EAL: No shared files mode enabled, IPC is disabled 00:04:53.312 EAL: Heap on socket 0 was shrunk by 10MB 00:04:53.312 EAL: Trying to obtain current memory policy. 
00:04:53.312 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.312 EAL: Restoring previous memory policy: 4 00:04:53.312 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.312 EAL: request: mp_malloc_sync 00:04:53.312 EAL: No shared files mode enabled, IPC is disabled 00:04:53.312 EAL: Heap on socket 0 was expanded by 18MB 00:04:53.312 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.312 EAL: request: mp_malloc_sync 00:04:53.312 EAL: No shared files mode enabled, IPC is disabled 00:04:53.312 EAL: Heap on socket 0 was shrunk by 18MB 00:04:53.312 EAL: Trying to obtain current memory policy. 00:04:53.312 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.312 EAL: Restoring previous memory policy: 4 00:04:53.312 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.312 EAL: request: mp_malloc_sync 00:04:53.312 EAL: No shared files mode enabled, IPC is disabled 00:04:53.312 EAL: Heap on socket 0 was expanded by 34MB 00:04:53.312 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.312 EAL: request: mp_malloc_sync 00:04:53.312 EAL: No shared files mode enabled, IPC is disabled 00:04:53.312 EAL: Heap on socket 0 was shrunk by 34MB 00:04:53.312 EAL: Trying to obtain current memory policy. 00:04:53.312 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.312 EAL: Restoring previous memory policy: 4 00:04:53.312 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.312 EAL: request: mp_malloc_sync 00:04:53.312 EAL: No shared files mode enabled, IPC is disabled 00:04:53.312 EAL: Heap on socket 0 was expanded by 66MB 00:04:53.312 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.312 EAL: request: mp_malloc_sync 00:04:53.312 EAL: No shared files mode enabled, IPC is disabled 00:04:53.312 EAL: Heap on socket 0 was shrunk by 66MB 00:04:53.312 EAL: Trying to obtain current memory policy. 00:04:53.312 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.312 EAL: Restoring previous memory policy: 4 00:04:53.312 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.312 EAL: request: mp_malloc_sync 00:04:53.312 EAL: No shared files mode enabled, IPC is disabled 00:04:53.312 EAL: Heap on socket 0 was expanded by 130MB 00:04:53.573 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.573 EAL: request: mp_malloc_sync 00:04:53.573 EAL: No shared files mode enabled, IPC is disabled 00:04:53.573 EAL: Heap on socket 0 was shrunk by 130MB 00:04:53.573 EAL: Trying to obtain current memory policy. 00:04:53.573 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.573 EAL: Restoring previous memory policy: 4 00:04:53.573 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.573 EAL: request: mp_malloc_sync 00:04:53.573 EAL: No shared files mode enabled, IPC is disabled 00:04:53.573 EAL: Heap on socket 0 was expanded by 258MB 00:04:53.573 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.573 EAL: request: mp_malloc_sync 00:04:53.573 EAL: No shared files mode enabled, IPC is disabled 00:04:53.573 EAL: Heap on socket 0 was shrunk by 258MB 00:04:53.573 EAL: Trying to obtain current memory policy. 
00:04:53.573 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.573 EAL: Restoring previous memory policy: 4 00:04:53.573 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.573 EAL: request: mp_malloc_sync 00:04:53.573 EAL: No shared files mode enabled, IPC is disabled 00:04:53.573 EAL: Heap on socket 0 was expanded by 514MB 00:04:53.573 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.841 EAL: request: mp_malloc_sync 00:04:53.841 EAL: No shared files mode enabled, IPC is disabled 00:04:53.841 EAL: Heap on socket 0 was shrunk by 514MB 00:04:53.841 EAL: Trying to obtain current memory policy. 00:04:53.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.841 EAL: Restoring previous memory policy: 4 00:04:53.841 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.841 EAL: request: mp_malloc_sync 00:04:53.841 EAL: No shared files mode enabled, IPC is disabled 00:04:53.841 EAL: Heap on socket 0 was expanded by 1026MB 00:04:54.109 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.109 EAL: request: mp_malloc_sync 00:04:54.109 EAL: No shared files mode enabled, IPC is disabled 00:04:54.109 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:54.109 passed 00:04:54.109 00:04:54.109 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.109 suites 1 1 n/a 0 0 00:04:54.109 tests 2 2 2 0 0 00:04:54.109 asserts 497 497 497 0 n/a 00:04:54.109 00:04:54.109 Elapsed time = 0.688 seconds 00:04:54.109 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.109 EAL: request: mp_malloc_sync 00:04:54.109 EAL: No shared files mode enabled, IPC is disabled 00:04:54.109 EAL: Heap on socket 0 was shrunk by 2MB 00:04:54.109 EAL: No shared files mode enabled, IPC is disabled 00:04:54.109 EAL: No shared files mode enabled, IPC is disabled 00:04:54.109 EAL: No shared files mode enabled, IPC is disabled 00:04:54.109 00:04:54.109 real 0m0.826s 00:04:54.109 user 0m0.436s 00:04:54.109 sys 0m0.363s 00:04:54.109 09:19:25 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:54.109 09:19:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:54.109 ************************************ 00:04:54.109 END TEST env_vtophys 00:04:54.109 ************************************ 00:04:54.109 09:19:25 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:54.109 09:19:25 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:54.109 09:19:25 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:54.109 09:19:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.109 ************************************ 00:04:54.109 START TEST env_pci 00:04:54.109 ************************************ 00:04:54.109 09:19:25 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:54.109 00:04:54.109 00:04:54.109 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.109 http://cunit.sourceforge.net/ 00:04:54.109 00:04:54.109 00:04:54.109 Suite: pci 00:04:54.109 Test: pci_hook ...[2024-06-11 09:19:25.860361] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 886651 has claimed it 00:04:54.109 EAL: Cannot find device (10000:00:01.0) 00:04:54.109 EAL: Failed to attach device on primary process 00:04:54.109 passed 00:04:54.109 00:04:54.109 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:54.109 suites 1 1 n/a 0 0 00:04:54.109 tests 1 1 1 0 0 00:04:54.109 asserts 25 25 25 0 n/a 00:04:54.109 00:04:54.109 Elapsed time = 0.030 seconds 00:04:54.109 00:04:54.109 real 0m0.051s 00:04:54.109 user 0m0.015s 00:04:54.109 sys 0m0.036s 00:04:54.109 09:19:25 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:54.109 09:19:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:54.109 ************************************ 00:04:54.109 END TEST env_pci 00:04:54.109 ************************************ 00:04:54.371 09:19:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:54.371 09:19:25 env -- env/env.sh@15 -- # uname 00:04:54.371 09:19:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:54.371 09:19:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:54.371 09:19:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:54.371 09:19:25 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:04:54.371 09:19:25 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:54.371 09:19:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.371 ************************************ 00:04:54.371 START TEST env_dpdk_post_init 00:04:54.371 ************************************ 00:04:54.371 09:19:25 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:54.371 EAL: Detected CPU lcores: 128 00:04:54.371 EAL: Detected NUMA nodes: 2 00:04:54.371 EAL: Detected shared linkage of DPDK 00:04:54.371 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:54.371 EAL: Selected IOVA mode 'VA' 00:04:54.371 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.371 EAL: VFIO support initialized 00:04:54.371 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:54.371 EAL: Using IOMMU type 1 (Type 1) 00:04:54.632 EAL: Ignore mapping IO port bar(1) 00:04:54.632 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:54.893 EAL: Ignore mapping IO port bar(1) 00:04:54.893 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:54.893 EAL: Ignore mapping IO port bar(1) 00:04:55.154 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:55.154 EAL: Ignore mapping IO port bar(1) 00:04:55.415 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:55.415 EAL: Ignore mapping IO port bar(1) 00:04:55.676 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:55.676 EAL: Ignore mapping IO port bar(1) 00:04:55.676 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:55.937 EAL: Ignore mapping IO port bar(1) 00:04:55.937 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:56.198 EAL: Ignore mapping IO port bar(1) 00:04:56.198 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:56.459 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:56.459 EAL: Ignore mapping IO port bar(1) 00:04:56.719 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:56.719 EAL: Ignore mapping IO port bar(1) 00:04:56.980 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 
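env_dpdk_post_init only attaches its drivers to functions that setup.sh previously bound to vfio-pci; which kernel driver currently claims a given BDF can be checked through sysfs, as in this hedged sketch (sample BDFs from this run):

# Illustrative sketch only: report the driver bound to selected PCI functions.
for bdf in 0000:65:00.0 0000:00:01.0 0000:80:01.0; do
    drv=/sys/bus/pci/devices/$bdf/driver
    if [[ -e $drv ]]; then
        echo "$bdf -> $(basename "$(readlink -f "$drv")")"
    else
        echo "$bdf -> (no driver bound)"
    fi
done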
00:04:56.980 EAL: Ignore mapping IO port bar(1) 00:04:57.241 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:57.241 EAL: Ignore mapping IO port bar(1) 00:04:57.241 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:57.502 EAL: Ignore mapping IO port bar(1) 00:04:57.502 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:57.763 EAL: Ignore mapping IO port bar(1) 00:04:57.763 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:58.023 EAL: Ignore mapping IO port bar(1) 00:04:58.023 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:58.023 EAL: Ignore mapping IO port bar(1) 00:04:58.284 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:58.284 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:58.284 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:58.284 Starting DPDK initialization... 00:04:58.284 Starting SPDK post initialization... 00:04:58.284 SPDK NVMe probe 00:04:58.284 Attaching to 0000:65:00.0 00:04:58.284 Attached to 0000:65:00.0 00:04:58.284 Cleaning up... 00:05:00.286 00:05:00.286 real 0m5.730s 00:05:00.286 user 0m0.179s 00:05:00.286 sys 0m0.108s 00:05:00.286 09:19:31 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:00.286 09:19:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.286 ************************************ 00:05:00.286 END TEST env_dpdk_post_init 00:05:00.286 ************************************ 00:05:00.286 09:19:31 env -- env/env.sh@26 -- # uname 00:05:00.286 09:19:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:00.286 09:19:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:00.286 09:19:31 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:00.286 09:19:31 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:00.286 09:19:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.286 ************************************ 00:05:00.286 START TEST env_mem_callbacks 00:05:00.286 ************************************ 00:05:00.286 09:19:31 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:00.286 EAL: Detected CPU lcores: 128 00:05:00.286 EAL: Detected NUMA nodes: 2 00:05:00.286 EAL: Detected shared linkage of DPDK 00:05:00.286 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:00.286 EAL: Selected IOVA mode 'VA' 00:05:00.286 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.286 EAL: VFIO support initialized 00:05:00.286 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:00.286 00:05:00.286 00:05:00.286 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.286 http://cunit.sourceforge.net/ 00:05:00.286 00:05:00.286 00:05:00.286 Suite: memory 00:05:00.286 Test: test ... 
00:05:00.286 register 0x200000200000 2097152 00:05:00.286 malloc 3145728 00:05:00.286 register 0x200000400000 4194304 00:05:00.286 buf 0x200000500000 len 3145728 PASSED 00:05:00.286 malloc 64 00:05:00.286 buf 0x2000004fff40 len 64 PASSED 00:05:00.286 malloc 4194304 00:05:00.286 register 0x200000800000 6291456 00:05:00.286 buf 0x200000a00000 len 4194304 PASSED 00:05:00.286 free 0x200000500000 3145728 00:05:00.286 free 0x2000004fff40 64 00:05:00.286 unregister 0x200000400000 4194304 PASSED 00:05:00.286 free 0x200000a00000 4194304 00:05:00.286 unregister 0x200000800000 6291456 PASSED 00:05:00.286 malloc 8388608 00:05:00.286 register 0x200000400000 10485760 00:05:00.286 buf 0x200000600000 len 8388608 PASSED 00:05:00.286 free 0x200000600000 8388608 00:05:00.286 unregister 0x200000400000 10485760 PASSED 00:05:00.286 passed 00:05:00.286 00:05:00.286 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.286 suites 1 1 n/a 0 0 00:05:00.286 tests 1 1 1 0 0 00:05:00.286 asserts 15 15 15 0 n/a 00:05:00.286 00:05:00.286 Elapsed time = 0.010 seconds 00:05:00.286 00:05:00.286 real 0m0.068s 00:05:00.286 user 0m0.025s 00:05:00.286 sys 0m0.043s 00:05:00.286 09:19:31 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:00.286 09:19:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:00.286 ************************************ 00:05:00.286 END TEST env_mem_callbacks 00:05:00.286 ************************************ 00:05:00.286 00:05:00.286 real 0m7.412s 00:05:00.286 user 0m1.027s 00:05:00.286 sys 0m0.926s 00:05:00.286 09:19:31 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:00.286 09:19:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.286 ************************************ 00:05:00.286 END TEST env 00:05:00.286 ************************************ 00:05:00.287 09:19:31 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:00.287 09:19:31 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:00.287 09:19:31 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:00.287 09:19:31 -- common/autotest_common.sh@10 -- # set +x 00:05:00.287 ************************************ 00:05:00.287 START TEST rpc 00:05:00.287 ************************************ 00:05:00.287 09:19:32 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:00.287 * Looking for test storage... 00:05:00.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:00.547 09:19:32 rpc -- rpc/rpc.sh@65 -- # spdk_pid=888096 00:05:00.547 09:19:32 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.547 09:19:32 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:00.547 09:19:32 rpc -- rpc/rpc.sh@67 -- # waitforlisten 888096 00:05:00.547 09:19:32 rpc -- common/autotest_common.sh@830 -- # '[' -z 888096 ']' 00:05:00.547 09:19:32 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.547 09:19:32 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:00.547 09:19:32 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
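waitforlisten, used by the rpc suite above, blocks until spdk_tgt answers on its UNIX-domain RPC socket; a hedged standalone equivalent (SPDK_DIR is an assumed checkout path) could look like:

# Illustrative sketch only: start spdk_tgt and poll its RPC socket until ready.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}               # assumed location
"$SPDK_DIR/build/bin/spdk_tgt" -e bdev &
tgt_pid=$!
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$tgt_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "spdk_tgt (pid $tgt_pid) is ready on /var/tmp/spdk.sock"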
00:05:00.547 09:19:32 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:00.547 09:19:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.547 [2024-06-11 09:19:32.163983] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:05:00.547 [2024-06-11 09:19:32.164045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid888096 ] 00:05:00.547 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.547 [2024-06-11 09:19:32.243974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.547 [2024-06-11 09:19:32.338382] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:00.547 [2024-06-11 09:19:32.338435] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 888096' to capture a snapshot of events at runtime. 00:05:00.547 [2024-06-11 09:19:32.338445] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:00.547 [2024-06-11 09:19:32.338453] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:00.547 [2024-06-11 09:19:32.338461] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid888096 for offline analysis/debug. 00:05:00.547 [2024-06-11 09:19:32.338488] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.491 09:19:33 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:01.491 09:19:33 rpc -- common/autotest_common.sh@863 -- # return 0 00:05:01.491 09:19:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:01.491 09:19:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:01.491 09:19:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:01.491 09:19:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:01.491 09:19:33 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:01.491 09:19:33 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:01.491 09:19:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.491 ************************************ 00:05:01.491 START TEST rpc_integrity 00:05:01.491 ************************************ 00:05:01.491 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:05:01.491 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:01.491 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.491 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.491 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.491 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:01.491 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:01.491 09:19:33 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:01.491 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:01.491 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.491 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.491 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.491 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:01.491 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:01.491 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.491 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.491 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.491 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:01.491 { 00:05:01.491 "name": "Malloc0", 00:05:01.491 "aliases": [ 00:05:01.491 "7bab0552-9835-4556-ab32-7282856033c0" 00:05:01.492 ], 00:05:01.492 "product_name": "Malloc disk", 00:05:01.492 "block_size": 512, 00:05:01.492 "num_blocks": 16384, 00:05:01.492 "uuid": "7bab0552-9835-4556-ab32-7282856033c0", 00:05:01.492 "assigned_rate_limits": { 00:05:01.492 "rw_ios_per_sec": 0, 00:05:01.492 "rw_mbytes_per_sec": 0, 00:05:01.492 "r_mbytes_per_sec": 0, 00:05:01.492 "w_mbytes_per_sec": 0 00:05:01.492 }, 00:05:01.492 "claimed": false, 00:05:01.492 "zoned": false, 00:05:01.492 "supported_io_types": { 00:05:01.492 "read": true, 00:05:01.492 "write": true, 00:05:01.492 "unmap": true, 00:05:01.492 "write_zeroes": true, 00:05:01.492 "flush": true, 00:05:01.492 "reset": true, 00:05:01.492 "compare": false, 00:05:01.492 "compare_and_write": false, 00:05:01.492 "abort": true, 00:05:01.492 "nvme_admin": false, 00:05:01.492 "nvme_io": false 00:05:01.492 }, 00:05:01.492 "memory_domains": [ 00:05:01.492 { 00:05:01.492 "dma_device_id": "system", 00:05:01.492 "dma_device_type": 1 00:05:01.492 }, 00:05:01.492 { 00:05:01.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.492 "dma_device_type": 2 00:05:01.492 } 00:05:01.492 ], 00:05:01.492 "driver_specific": {} 00:05:01.492 } 00:05:01.492 ]' 00:05:01.492 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:01.492 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:01.492 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:01.492 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.492 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.492 [2024-06-11 09:19:33.212149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:01.492 [2024-06-11 09:19:33.212198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:01.492 [2024-06-11 09:19:33.212213] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ca5be0 00:05:01.492 [2024-06-11 09:19:33.212221] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:01.492 [2024-06-11 09:19:33.213783] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:01.492 [2024-06-11 09:19:33.213821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:01.492 Passthru0 00:05:01.492 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.492 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:01.492 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.492 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.492 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.492 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:01.492 { 00:05:01.492 "name": "Malloc0", 00:05:01.492 "aliases": [ 00:05:01.492 "7bab0552-9835-4556-ab32-7282856033c0" 00:05:01.492 ], 00:05:01.492 "product_name": "Malloc disk", 00:05:01.492 "block_size": 512, 00:05:01.492 "num_blocks": 16384, 00:05:01.492 "uuid": "7bab0552-9835-4556-ab32-7282856033c0", 00:05:01.492 "assigned_rate_limits": { 00:05:01.492 "rw_ios_per_sec": 0, 00:05:01.492 "rw_mbytes_per_sec": 0, 00:05:01.492 "r_mbytes_per_sec": 0, 00:05:01.492 "w_mbytes_per_sec": 0 00:05:01.492 }, 00:05:01.492 "claimed": true, 00:05:01.492 "claim_type": "exclusive_write", 00:05:01.492 "zoned": false, 00:05:01.492 "supported_io_types": { 00:05:01.492 "read": true, 00:05:01.492 "write": true, 00:05:01.492 "unmap": true, 00:05:01.492 "write_zeroes": true, 00:05:01.492 "flush": true, 00:05:01.492 "reset": true, 00:05:01.492 "compare": false, 00:05:01.492 "compare_and_write": false, 00:05:01.492 "abort": true, 00:05:01.492 "nvme_admin": false, 00:05:01.492 "nvme_io": false 00:05:01.492 }, 00:05:01.492 "memory_domains": [ 00:05:01.492 { 00:05:01.492 "dma_device_id": "system", 00:05:01.492 "dma_device_type": 1 00:05:01.492 }, 00:05:01.492 { 00:05:01.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.492 "dma_device_type": 2 00:05:01.492 } 00:05:01.492 ], 00:05:01.492 "driver_specific": {} 00:05:01.492 }, 00:05:01.492 { 00:05:01.492 "name": "Passthru0", 00:05:01.492 "aliases": [ 00:05:01.492 "dbb53a95-c49c-5fea-8edd-ac22bb14404d" 00:05:01.492 ], 00:05:01.492 "product_name": "passthru", 00:05:01.492 "block_size": 512, 00:05:01.492 "num_blocks": 16384, 00:05:01.492 "uuid": "dbb53a95-c49c-5fea-8edd-ac22bb14404d", 00:05:01.492 "assigned_rate_limits": { 00:05:01.492 "rw_ios_per_sec": 0, 00:05:01.492 "rw_mbytes_per_sec": 0, 00:05:01.492 "r_mbytes_per_sec": 0, 00:05:01.492 "w_mbytes_per_sec": 0 00:05:01.492 }, 00:05:01.492 "claimed": false, 00:05:01.492 "zoned": false, 00:05:01.492 "supported_io_types": { 00:05:01.492 "read": true, 00:05:01.492 "write": true, 00:05:01.492 "unmap": true, 00:05:01.492 "write_zeroes": true, 00:05:01.492 "flush": true, 00:05:01.492 "reset": true, 00:05:01.492 "compare": false, 00:05:01.492 "compare_and_write": false, 00:05:01.492 "abort": true, 00:05:01.492 "nvme_admin": false, 00:05:01.492 "nvme_io": false 00:05:01.492 }, 00:05:01.492 "memory_domains": [ 00:05:01.492 { 00:05:01.492 "dma_device_id": "system", 00:05:01.492 "dma_device_type": 1 00:05:01.492 }, 00:05:01.492 { 00:05:01.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.492 "dma_device_type": 2 00:05:01.492 } 00:05:01.492 ], 00:05:01.492 "driver_specific": { 00:05:01.492 "passthru": { 00:05:01.492 "name": "Passthru0", 00:05:01.492 "base_bdev_name": "Malloc0" 00:05:01.492 } 00:05:01.492 } 00:05:01.492 } 00:05:01.492 ]' 00:05:01.492 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:01.492 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:01.492 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:01.492 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.492 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.492 
09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.492 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:01.492 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.492 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.753 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.753 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:01.753 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.753 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.753 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.753 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:01.753 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:01.753 09:19:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:01.753 00:05:01.753 real 0m0.298s 00:05:01.753 user 0m0.194s 00:05:01.753 sys 0m0.035s 00:05:01.753 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:01.753 09:19:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.753 ************************************ 00:05:01.753 END TEST rpc_integrity 00:05:01.753 ************************************ 00:05:01.753 09:19:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:01.753 09:19:33 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:01.753 09:19:33 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:01.753 09:19:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.753 ************************************ 00:05:01.753 START TEST rpc_plugins 00:05:01.753 ************************************ 00:05:01.753 09:19:33 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:05:01.753 09:19:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:01.753 09:19:33 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.753 09:19:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.754 09:19:33 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.754 09:19:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:01.754 09:19:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:01.754 09:19:33 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.754 09:19:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.754 09:19:33 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.754 09:19:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:01.754 { 00:05:01.754 "name": "Malloc1", 00:05:01.754 "aliases": [ 00:05:01.754 "eaeeb1c0-fa4a-4bb7-8a3f-c914528eee55" 00:05:01.754 ], 00:05:01.754 "product_name": "Malloc disk", 00:05:01.754 "block_size": 4096, 00:05:01.754 "num_blocks": 256, 00:05:01.754 "uuid": "eaeeb1c0-fa4a-4bb7-8a3f-c914528eee55", 00:05:01.754 "assigned_rate_limits": { 00:05:01.754 "rw_ios_per_sec": 0, 00:05:01.754 "rw_mbytes_per_sec": 0, 00:05:01.754 "r_mbytes_per_sec": 0, 00:05:01.754 "w_mbytes_per_sec": 0 00:05:01.754 }, 00:05:01.754 "claimed": false, 00:05:01.754 "zoned": false, 00:05:01.754 "supported_io_types": { 00:05:01.754 "read": true, 00:05:01.754 "write": true, 00:05:01.754 "unmap": true, 00:05:01.754 "write_zeroes": true, 00:05:01.754 
"flush": true, 00:05:01.754 "reset": true, 00:05:01.754 "compare": false, 00:05:01.754 "compare_and_write": false, 00:05:01.754 "abort": true, 00:05:01.754 "nvme_admin": false, 00:05:01.754 "nvme_io": false 00:05:01.754 }, 00:05:01.754 "memory_domains": [ 00:05:01.754 { 00:05:01.754 "dma_device_id": "system", 00:05:01.754 "dma_device_type": 1 00:05:01.754 }, 00:05:01.754 { 00:05:01.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.754 "dma_device_type": 2 00:05:01.754 } 00:05:01.754 ], 00:05:01.754 "driver_specific": {} 00:05:01.754 } 00:05:01.754 ]' 00:05:01.754 09:19:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:01.754 09:19:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:01.754 09:19:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:01.754 09:19:33 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.754 09:19:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.754 09:19:33 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.754 09:19:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:01.754 09:19:33 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.754 09:19:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:01.754 09:19:33 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.754 09:19:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:01.754 09:19:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:02.015 09:19:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:02.015 00:05:02.015 real 0m0.152s 00:05:02.015 user 0m0.096s 00:05:02.015 sys 0m0.019s 00:05:02.015 09:19:33 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:02.015 09:19:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.015 ************************************ 00:05:02.015 END TEST rpc_plugins 00:05:02.015 ************************************ 00:05:02.015 09:19:33 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:02.015 09:19:33 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:02.015 09:19:33 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:02.015 09:19:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.015 ************************************ 00:05:02.015 START TEST rpc_trace_cmd_test 00:05:02.015 ************************************ 00:05:02.015 09:19:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:05:02.015 09:19:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:02.015 09:19:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:02.015 09:19:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:02.015 09:19:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:02.015 09:19:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:02.015 09:19:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:02.015 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid888096", 00:05:02.015 "tpoint_group_mask": "0x8", 00:05:02.015 "iscsi_conn": { 00:05:02.015 "mask": "0x2", 00:05:02.015 "tpoint_mask": "0x0" 00:05:02.015 }, 00:05:02.015 "scsi": { 00:05:02.015 "mask": "0x4", 00:05:02.015 "tpoint_mask": "0x0" 00:05:02.015 }, 00:05:02.015 "bdev": { 00:05:02.015 "mask": "0x8", 00:05:02.015 "tpoint_mask": 
"0xffffffffffffffff" 00:05:02.015 }, 00:05:02.015 "nvmf_rdma": { 00:05:02.015 "mask": "0x10", 00:05:02.015 "tpoint_mask": "0x0" 00:05:02.015 }, 00:05:02.015 "nvmf_tcp": { 00:05:02.015 "mask": "0x20", 00:05:02.015 "tpoint_mask": "0x0" 00:05:02.015 }, 00:05:02.015 "ftl": { 00:05:02.015 "mask": "0x40", 00:05:02.015 "tpoint_mask": "0x0" 00:05:02.015 }, 00:05:02.015 "blobfs": { 00:05:02.015 "mask": "0x80", 00:05:02.015 "tpoint_mask": "0x0" 00:05:02.015 }, 00:05:02.015 "dsa": { 00:05:02.015 "mask": "0x200", 00:05:02.015 "tpoint_mask": "0x0" 00:05:02.015 }, 00:05:02.015 "thread": { 00:05:02.015 "mask": "0x400", 00:05:02.015 "tpoint_mask": "0x0" 00:05:02.015 }, 00:05:02.015 "nvme_pcie": { 00:05:02.015 "mask": "0x800", 00:05:02.015 "tpoint_mask": "0x0" 00:05:02.015 }, 00:05:02.015 "iaa": { 00:05:02.015 "mask": "0x1000", 00:05:02.015 "tpoint_mask": "0x0" 00:05:02.015 }, 00:05:02.015 "nvme_tcp": { 00:05:02.015 "mask": "0x2000", 00:05:02.015 "tpoint_mask": "0x0" 00:05:02.015 }, 00:05:02.015 "bdev_nvme": { 00:05:02.015 "mask": "0x4000", 00:05:02.015 "tpoint_mask": "0x0" 00:05:02.015 }, 00:05:02.015 "sock": { 00:05:02.015 "mask": "0x8000", 00:05:02.015 "tpoint_mask": "0x0" 00:05:02.015 } 00:05:02.015 }' 00:05:02.015 09:19:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:02.015 09:19:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:02.016 09:19:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:02.016 09:19:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:02.016 09:19:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:02.277 09:19:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:02.277 09:19:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:02.277 09:19:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:02.277 09:19:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:02.277 09:19:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:02.277 00:05:02.277 real 0m0.251s 00:05:02.277 user 0m0.217s 00:05:02.277 sys 0m0.027s 00:05:02.277 09:19:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:02.277 09:19:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:02.277 ************************************ 00:05:02.277 END TEST rpc_trace_cmd_test 00:05:02.277 ************************************ 00:05:02.277 09:19:33 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:02.277 09:19:33 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:02.277 09:19:33 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:02.277 09:19:33 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:02.277 09:19:33 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:02.277 09:19:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.277 ************************************ 00:05:02.277 START TEST rpc_daemon_integrity 00:05:02.277 ************************************ 00:05:02.277 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:05:02.277 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:02.277 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:02.277 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.277 09:19:34 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:02.277 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:02.277 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:02.277 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:02.277 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:02.277 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:02.277 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.277 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:02.277 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:02.277 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:02.277 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:02.277 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:02.539 { 00:05:02.539 "name": "Malloc2", 00:05:02.539 "aliases": [ 00:05:02.539 "83952c4a-05db-49b1-85a7-781768c12061" 00:05:02.539 ], 00:05:02.539 "product_name": "Malloc disk", 00:05:02.539 "block_size": 512, 00:05:02.539 "num_blocks": 16384, 00:05:02.539 "uuid": "83952c4a-05db-49b1-85a7-781768c12061", 00:05:02.539 "assigned_rate_limits": { 00:05:02.539 "rw_ios_per_sec": 0, 00:05:02.539 "rw_mbytes_per_sec": 0, 00:05:02.539 "r_mbytes_per_sec": 0, 00:05:02.539 "w_mbytes_per_sec": 0 00:05:02.539 }, 00:05:02.539 "claimed": false, 00:05:02.539 "zoned": false, 00:05:02.539 "supported_io_types": { 00:05:02.539 "read": true, 00:05:02.539 "write": true, 00:05:02.539 "unmap": true, 00:05:02.539 "write_zeroes": true, 00:05:02.539 "flush": true, 00:05:02.539 "reset": true, 00:05:02.539 "compare": false, 00:05:02.539 "compare_and_write": false, 00:05:02.539 "abort": true, 00:05:02.539 "nvme_admin": false, 00:05:02.539 "nvme_io": false 00:05:02.539 }, 00:05:02.539 "memory_domains": [ 00:05:02.539 { 00:05:02.539 "dma_device_id": "system", 00:05:02.539 "dma_device_type": 1 00:05:02.539 }, 00:05:02.539 { 00:05:02.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.539 "dma_device_type": 2 00:05:02.539 } 00:05:02.539 ], 00:05:02.539 "driver_specific": {} 00:05:02.539 } 00:05:02.539 ]' 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.539 [2024-06-11 09:19:34.146692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:02.539 [2024-06-11 09:19:34.146737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:02.539 [2024-06-11 09:19:34.146755] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c9d4b0 00:05:02.539 [2024-06-11 09:19:34.146762] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:02.539 [2024-06-11 09:19:34.148145] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:02.539 [2024-06-11 09:19:34.148180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:02.539 Passthru0 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:02.539 { 00:05:02.539 "name": "Malloc2", 00:05:02.539 "aliases": [ 00:05:02.539 "83952c4a-05db-49b1-85a7-781768c12061" 00:05:02.539 ], 00:05:02.539 "product_name": "Malloc disk", 00:05:02.539 "block_size": 512, 00:05:02.539 "num_blocks": 16384, 00:05:02.539 "uuid": "83952c4a-05db-49b1-85a7-781768c12061", 00:05:02.539 "assigned_rate_limits": { 00:05:02.539 "rw_ios_per_sec": 0, 00:05:02.539 "rw_mbytes_per_sec": 0, 00:05:02.539 "r_mbytes_per_sec": 0, 00:05:02.539 "w_mbytes_per_sec": 0 00:05:02.539 }, 00:05:02.539 "claimed": true, 00:05:02.539 "claim_type": "exclusive_write", 00:05:02.539 "zoned": false, 00:05:02.539 "supported_io_types": { 00:05:02.539 "read": true, 00:05:02.539 "write": true, 00:05:02.539 "unmap": true, 00:05:02.539 "write_zeroes": true, 00:05:02.539 "flush": true, 00:05:02.539 "reset": true, 00:05:02.539 "compare": false, 00:05:02.539 "compare_and_write": false, 00:05:02.539 "abort": true, 00:05:02.539 "nvme_admin": false, 00:05:02.539 "nvme_io": false 00:05:02.539 }, 00:05:02.539 "memory_domains": [ 00:05:02.539 { 00:05:02.539 "dma_device_id": "system", 00:05:02.539 "dma_device_type": 1 00:05:02.539 }, 00:05:02.539 { 00:05:02.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.539 "dma_device_type": 2 00:05:02.539 } 00:05:02.539 ], 00:05:02.539 "driver_specific": {} 00:05:02.539 }, 00:05:02.539 { 00:05:02.539 "name": "Passthru0", 00:05:02.539 "aliases": [ 00:05:02.539 "3969ca5a-d497-5994-aee4-9c4c74f5c60b" 00:05:02.539 ], 00:05:02.539 "product_name": "passthru", 00:05:02.539 "block_size": 512, 00:05:02.539 "num_blocks": 16384, 00:05:02.539 "uuid": "3969ca5a-d497-5994-aee4-9c4c74f5c60b", 00:05:02.539 "assigned_rate_limits": { 00:05:02.539 "rw_ios_per_sec": 0, 00:05:02.539 "rw_mbytes_per_sec": 0, 00:05:02.539 "r_mbytes_per_sec": 0, 00:05:02.539 "w_mbytes_per_sec": 0 00:05:02.539 }, 00:05:02.539 "claimed": false, 00:05:02.539 "zoned": false, 00:05:02.539 "supported_io_types": { 00:05:02.539 "read": true, 00:05:02.539 "write": true, 00:05:02.539 "unmap": true, 00:05:02.539 "write_zeroes": true, 00:05:02.539 "flush": true, 00:05:02.539 "reset": true, 00:05:02.539 "compare": false, 00:05:02.539 "compare_and_write": false, 00:05:02.539 "abort": true, 00:05:02.539 "nvme_admin": false, 00:05:02.539 "nvme_io": false 00:05:02.539 }, 00:05:02.539 "memory_domains": [ 00:05:02.539 { 00:05:02.539 "dma_device_id": "system", 00:05:02.539 "dma_device_type": 1 00:05:02.539 }, 00:05:02.539 { 00:05:02.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.539 "dma_device_type": 2 00:05:02.539 } 00:05:02.539 ], 00:05:02.539 "driver_specific": { 00:05:02.539 "passthru": { 00:05:02.539 "name": "Passthru0", 00:05:02.539 "base_bdev_name": "Malloc2" 00:05:02.539 } 00:05:02.539 } 00:05:02.539 } 00:05:02.539 ]' 00:05:02.539 09:19:34 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:02.539 00:05:02.539 real 0m0.291s 00:05:02.539 user 0m0.191s 00:05:02.539 sys 0m0.036s 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:02.539 09:19:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.539 ************************************ 00:05:02.539 END TEST rpc_daemon_integrity 00:05:02.539 ************************************ 00:05:02.539 09:19:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:02.539 09:19:34 rpc -- rpc/rpc.sh@84 -- # killprocess 888096 00:05:02.539 09:19:34 rpc -- common/autotest_common.sh@949 -- # '[' -z 888096 ']' 00:05:02.539 09:19:34 rpc -- common/autotest_common.sh@953 -- # kill -0 888096 00:05:02.539 09:19:34 rpc -- common/autotest_common.sh@954 -- # uname 00:05:02.539 09:19:34 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:02.539 09:19:34 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 888096 00:05:02.801 09:19:34 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:02.801 09:19:34 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:02.801 09:19:34 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 888096' 00:05:02.801 killing process with pid 888096 00:05:02.801 09:19:34 rpc -- common/autotest_common.sh@968 -- # kill 888096 00:05:02.801 09:19:34 rpc -- common/autotest_common.sh@973 -- # wait 888096 00:05:03.096 00:05:03.096 real 0m2.636s 00:05:03.096 user 0m3.455s 00:05:03.096 sys 0m0.776s 00:05:03.096 09:19:34 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:03.096 09:19:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.096 ************************************ 00:05:03.096 END TEST rpc 00:05:03.096 ************************************ 00:05:03.096 09:19:34 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:03.096 09:19:34 -- 
common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:03.096 09:19:34 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:03.096 09:19:34 -- common/autotest_common.sh@10 -- # set +x 00:05:03.096 ************************************ 00:05:03.096 START TEST skip_rpc 00:05:03.096 ************************************ 00:05:03.096 09:19:34 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:03.096 * Looking for test storage... 00:05:03.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:03.097 09:19:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:03.097 09:19:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:03.097 09:19:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:03.097 09:19:34 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:03.097 09:19:34 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:03.097 09:19:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.097 ************************************ 00:05:03.097 START TEST skip_rpc 00:05:03.097 ************************************ 00:05:03.097 09:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:05:03.097 09:19:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=888932 00:05:03.097 09:19:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.097 09:19:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:03.097 09:19:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:03.358 [2024-06-11 09:19:34.916680] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:05:03.358 [2024-06-11 09:19:34.916744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid888932 ] 00:05:03.358 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.358 [2024-06-11 09:19:34.996422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.359 [2024-06-11 09:19:35.092245] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 888932 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 888932 ']' 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 888932 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 888932 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 888932' 00:05:08.660 killing process with pid 888932 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 888932 00:05:08.660 09:19:39 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 888932 00:05:08.660 00:05:08.660 real 0m5.282s 00:05:08.660 user 0m5.057s 00:05:08.660 sys 0m0.262s 00:05:08.660 09:19:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:08.660 09:19:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.660 ************************************ 00:05:08.660 END TEST skip_rpc 
00:05:08.660 ************************************ 00:05:08.660 09:19:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:08.660 09:19:40 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:08.660 09:19:40 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:08.660 09:19:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.660 ************************************ 00:05:08.661 START TEST skip_rpc_with_json 00:05:08.661 ************************************ 00:05:08.661 09:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:05:08.661 09:19:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:08.661 09:19:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=889978 00:05:08.661 09:19:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.661 09:19:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 889978 00:05:08.661 09:19:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.661 09:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 889978 ']' 00:05:08.661 09:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.661 09:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:08.661 09:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.661 09:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:08.661 09:19:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.661 [2024-06-11 09:19:40.285898] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:05:08.661 [2024-06-11 09:19:40.285953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid889978 ] 00:05:08.661 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.661 [2024-06-11 09:19:40.363140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.661 [2024-06-11 09:19:40.430839] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.601 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:09.601 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:05:09.601 09:19:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:09.601 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:09.601 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.601 [2024-06-11 09:19:41.133848] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:09.601 request: 00:05:09.601 { 00:05:09.601 "trtype": "tcp", 00:05:09.601 "method": "nvmf_get_transports", 00:05:09.601 "req_id": 1 00:05:09.601 } 00:05:09.601 Got JSON-RPC error response 00:05:09.601 response: 00:05:09.601 { 00:05:09.601 "code": -19, 00:05:09.601 "message": "No such device" 00:05:09.601 } 00:05:09.601 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:09.601 09:19:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:09.601 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:09.601 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.601 [2024-06-11 09:19:41.145964] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.601 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:09.601 09:19:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:09.601 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:09.601 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.601 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:09.601 09:19:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:09.601 { 00:05:09.601 "subsystems": [ 00:05:09.601 { 00:05:09.601 "subsystem": "vfio_user_target", 00:05:09.601 "config": null 00:05:09.601 }, 00:05:09.601 { 00:05:09.601 "subsystem": "keyring", 00:05:09.601 "config": [] 00:05:09.601 }, 00:05:09.601 { 00:05:09.601 "subsystem": "iobuf", 00:05:09.601 "config": [ 00:05:09.601 { 00:05:09.601 "method": "iobuf_set_options", 00:05:09.601 "params": { 00:05:09.601 "small_pool_count": 8192, 00:05:09.601 "large_pool_count": 1024, 00:05:09.601 "small_bufsize": 8192, 00:05:09.601 "large_bufsize": 135168 00:05:09.601 } 00:05:09.601 } 00:05:09.601 ] 00:05:09.601 }, 00:05:09.601 { 00:05:09.601 "subsystem": "sock", 00:05:09.601 "config": [ 00:05:09.601 { 00:05:09.601 "method": "sock_set_default_impl", 00:05:09.601 "params": { 00:05:09.601 "impl_name": "posix" 00:05:09.601 } 00:05:09.601 }, 00:05:09.601 { 00:05:09.601 "method": 
"sock_impl_set_options", 00:05:09.601 "params": { 00:05:09.601 "impl_name": "ssl", 00:05:09.601 "recv_buf_size": 4096, 00:05:09.601 "send_buf_size": 4096, 00:05:09.601 "enable_recv_pipe": true, 00:05:09.601 "enable_quickack": false, 00:05:09.602 "enable_placement_id": 0, 00:05:09.602 "enable_zerocopy_send_server": true, 00:05:09.602 "enable_zerocopy_send_client": false, 00:05:09.602 "zerocopy_threshold": 0, 00:05:09.602 "tls_version": 0, 00:05:09.602 "enable_ktls": false 00:05:09.602 } 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "method": "sock_impl_set_options", 00:05:09.602 "params": { 00:05:09.602 "impl_name": "posix", 00:05:09.602 "recv_buf_size": 2097152, 00:05:09.602 "send_buf_size": 2097152, 00:05:09.602 "enable_recv_pipe": true, 00:05:09.602 "enable_quickack": false, 00:05:09.602 "enable_placement_id": 0, 00:05:09.602 "enable_zerocopy_send_server": true, 00:05:09.602 "enable_zerocopy_send_client": false, 00:05:09.602 "zerocopy_threshold": 0, 00:05:09.602 "tls_version": 0, 00:05:09.602 "enable_ktls": false 00:05:09.602 } 00:05:09.602 } 00:05:09.602 ] 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "subsystem": "vmd", 00:05:09.602 "config": [] 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "subsystem": "accel", 00:05:09.602 "config": [ 00:05:09.602 { 00:05:09.602 "method": "accel_set_options", 00:05:09.602 "params": { 00:05:09.602 "small_cache_size": 128, 00:05:09.602 "large_cache_size": 16, 00:05:09.602 "task_count": 2048, 00:05:09.602 "sequence_count": 2048, 00:05:09.602 "buf_count": 2048 00:05:09.602 } 00:05:09.602 } 00:05:09.602 ] 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "subsystem": "bdev", 00:05:09.602 "config": [ 00:05:09.602 { 00:05:09.602 "method": "bdev_set_options", 00:05:09.602 "params": { 00:05:09.602 "bdev_io_pool_size": 65535, 00:05:09.602 "bdev_io_cache_size": 256, 00:05:09.602 "bdev_auto_examine": true, 00:05:09.602 "iobuf_small_cache_size": 128, 00:05:09.602 "iobuf_large_cache_size": 16 00:05:09.602 } 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "method": "bdev_raid_set_options", 00:05:09.602 "params": { 00:05:09.602 "process_window_size_kb": 1024 00:05:09.602 } 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "method": "bdev_iscsi_set_options", 00:05:09.602 "params": { 00:05:09.602 "timeout_sec": 30 00:05:09.602 } 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "method": "bdev_nvme_set_options", 00:05:09.602 "params": { 00:05:09.602 "action_on_timeout": "none", 00:05:09.602 "timeout_us": 0, 00:05:09.602 "timeout_admin_us": 0, 00:05:09.602 "keep_alive_timeout_ms": 10000, 00:05:09.602 "arbitration_burst": 0, 00:05:09.602 "low_priority_weight": 0, 00:05:09.602 "medium_priority_weight": 0, 00:05:09.602 "high_priority_weight": 0, 00:05:09.602 "nvme_adminq_poll_period_us": 10000, 00:05:09.602 "nvme_ioq_poll_period_us": 0, 00:05:09.602 "io_queue_requests": 0, 00:05:09.602 "delay_cmd_submit": true, 00:05:09.602 "transport_retry_count": 4, 00:05:09.602 "bdev_retry_count": 3, 00:05:09.602 "transport_ack_timeout": 0, 00:05:09.602 "ctrlr_loss_timeout_sec": 0, 00:05:09.602 "reconnect_delay_sec": 0, 00:05:09.602 "fast_io_fail_timeout_sec": 0, 00:05:09.602 "disable_auto_failback": false, 00:05:09.602 "generate_uuids": false, 00:05:09.602 "transport_tos": 0, 00:05:09.602 "nvme_error_stat": false, 00:05:09.602 "rdma_srq_size": 0, 00:05:09.602 "io_path_stat": false, 00:05:09.602 "allow_accel_sequence": false, 00:05:09.602 "rdma_max_cq_size": 0, 00:05:09.602 "rdma_cm_event_timeout_ms": 0, 00:05:09.602 "dhchap_digests": [ 00:05:09.602 "sha256", 00:05:09.602 "sha384", 00:05:09.602 "sha512" 
00:05:09.602 ], 00:05:09.602 "dhchap_dhgroups": [ 00:05:09.602 "null", 00:05:09.602 "ffdhe2048", 00:05:09.602 "ffdhe3072", 00:05:09.602 "ffdhe4096", 00:05:09.602 "ffdhe6144", 00:05:09.602 "ffdhe8192" 00:05:09.602 ] 00:05:09.602 } 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "method": "bdev_nvme_set_hotplug", 00:05:09.602 "params": { 00:05:09.602 "period_us": 100000, 00:05:09.602 "enable": false 00:05:09.602 } 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "method": "bdev_wait_for_examine" 00:05:09.602 } 00:05:09.602 ] 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "subsystem": "scsi", 00:05:09.602 "config": null 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "subsystem": "scheduler", 00:05:09.602 "config": [ 00:05:09.602 { 00:05:09.602 "method": "framework_set_scheduler", 00:05:09.602 "params": { 00:05:09.602 "name": "static" 00:05:09.602 } 00:05:09.602 } 00:05:09.602 ] 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "subsystem": "vhost_scsi", 00:05:09.602 "config": [] 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "subsystem": "vhost_blk", 00:05:09.602 "config": [] 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "subsystem": "ublk", 00:05:09.602 "config": [] 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "subsystem": "nbd", 00:05:09.602 "config": [] 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "subsystem": "nvmf", 00:05:09.602 "config": [ 00:05:09.602 { 00:05:09.602 "method": "nvmf_set_config", 00:05:09.602 "params": { 00:05:09.602 "discovery_filter": "match_any", 00:05:09.602 "admin_cmd_passthru": { 00:05:09.602 "identify_ctrlr": false 00:05:09.602 } 00:05:09.602 } 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "method": "nvmf_set_max_subsystems", 00:05:09.602 "params": { 00:05:09.602 "max_subsystems": 1024 00:05:09.602 } 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "method": "nvmf_set_crdt", 00:05:09.602 "params": { 00:05:09.602 "crdt1": 0, 00:05:09.602 "crdt2": 0, 00:05:09.602 "crdt3": 0 00:05:09.602 } 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "method": "nvmf_create_transport", 00:05:09.602 "params": { 00:05:09.602 "trtype": "TCP", 00:05:09.602 "max_queue_depth": 128, 00:05:09.602 "max_io_qpairs_per_ctrlr": 127, 00:05:09.602 "in_capsule_data_size": 4096, 00:05:09.602 "max_io_size": 131072, 00:05:09.602 "io_unit_size": 131072, 00:05:09.602 "max_aq_depth": 128, 00:05:09.602 "num_shared_buffers": 511, 00:05:09.602 "buf_cache_size": 4294967295, 00:05:09.602 "dif_insert_or_strip": false, 00:05:09.602 "zcopy": false, 00:05:09.602 "c2h_success": true, 00:05:09.602 "sock_priority": 0, 00:05:09.602 "abort_timeout_sec": 1, 00:05:09.602 "ack_timeout": 0, 00:05:09.602 "data_wr_pool_size": 0 00:05:09.602 } 00:05:09.602 } 00:05:09.602 ] 00:05:09.602 }, 00:05:09.602 { 00:05:09.602 "subsystem": "iscsi", 00:05:09.602 "config": [ 00:05:09.602 { 00:05:09.602 "method": "iscsi_set_options", 00:05:09.602 "params": { 00:05:09.602 "node_base": "iqn.2016-06.io.spdk", 00:05:09.602 "max_sessions": 128, 00:05:09.602 "max_connections_per_session": 2, 00:05:09.602 "max_queue_depth": 64, 00:05:09.602 "default_time2wait": 2, 00:05:09.602 "default_time2retain": 20, 00:05:09.602 "first_burst_length": 8192, 00:05:09.602 "immediate_data": true, 00:05:09.602 "allow_duplicated_isid": false, 00:05:09.602 "error_recovery_level": 0, 00:05:09.602 "nop_timeout": 60, 00:05:09.602 "nop_in_interval": 30, 00:05:09.602 "disable_chap": false, 00:05:09.602 "require_chap": false, 00:05:09.602 "mutual_chap": false, 00:05:09.602 "chap_group": 0, 00:05:09.602 "max_large_datain_per_connection": 64, 00:05:09.602 "max_r2t_per_connection": 4, 00:05:09.602 
"pdu_pool_size": 36864, 00:05:09.602 "immediate_data_pool_size": 16384, 00:05:09.602 "data_out_pool_size": 2048 00:05:09.602 } 00:05:09.602 } 00:05:09.602 ] 00:05:09.602 } 00:05:09.602 ] 00:05:09.602 } 00:05:09.602 09:19:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:09.602 09:19:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 889978 00:05:09.602 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 889978 ']' 00:05:09.602 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 889978 00:05:09.602 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:09.602 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:09.602 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 889978 00:05:09.602 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:09.602 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:09.602 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 889978' 00:05:09.602 killing process with pid 889978 00:05:09.602 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 889978 00:05:09.602 09:19:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 889978 00:05:09.863 09:19:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=890316 00:05:09.863 09:19:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:09.863 09:19:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 890316 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 890316 ']' 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 890316 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 890316 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 890316' 00:05:15.154 killing process with pid 890316 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 890316 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 890316 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:15.154 00:05:15.154 real 0m6.631s 
00:05:15.154 user 0m6.578s 00:05:15.154 sys 0m0.553s 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.154 ************************************ 00:05:15.154 END TEST skip_rpc_with_json 00:05:15.154 ************************************ 00:05:15.154 09:19:46 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:15.154 09:19:46 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:15.154 09:19:46 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:15.154 09:19:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.154 ************************************ 00:05:15.154 START TEST skip_rpc_with_delay 00:05:15.154 ************************************ 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:15.154 09:19:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:15.415 [2024-06-11 09:19:46.985501] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
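The error above is the expected outcome: --wait-for-rpc tells the application to pause start-up until an RPC arrives, so it cannot be combined with --no-rpc-server, and the test asserts that the invocation fails. A minimal sketch of the same expected-failure check, with the binary path and flags taken from the log:

    if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
           --no-rpc-server -m 0x1 --wait-for-rpc; then
        # Reaching this branch would mean the invalid flag combination was accepted
        echo "spdk_tgt started despite --wait-for-rpc with no RPC server" >&2
        exit 1
    fi
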
00:05:15.415 [2024-06-11 09:19:46.985578] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:15.415 09:19:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:05:15.415 09:19:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:15.415 09:19:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:15.415 09:19:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:15.415 00:05:15.415 real 0m0.071s 00:05:15.415 user 0m0.041s 00:05:15.415 sys 0m0.029s 00:05:15.415 09:19:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:15.415 09:19:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:15.415 ************************************ 00:05:15.415 END TEST skip_rpc_with_delay 00:05:15.415 ************************************ 00:05:15.415 09:19:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:15.415 09:19:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:15.415 09:19:47 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:15.415 09:19:47 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:15.415 09:19:47 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:15.415 09:19:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.415 ************************************ 00:05:15.415 START TEST exit_on_failed_rpc_init 00:05:15.415 ************************************ 00:05:15.415 09:19:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:05:15.415 09:19:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=891381 00:05:15.415 09:19:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 891381 00:05:15.415 09:19:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.415 09:19:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 891381 ']' 00:05:15.415 09:19:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.415 09:19:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:15.415 09:19:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.415 09:19:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:15.415 09:19:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.415 [2024-06-11 09:19:47.135168] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:05:15.415 [2024-06-11 09:19:47.135227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891381 ] 00:05:15.415 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.415 [2024-06-11 09:19:47.215205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.676 [2024-06-11 09:19:47.286819] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.246 09:19:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:16.246 09:19:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:05:16.246 09:19:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.246 09:19:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:16.246 09:19:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:05:16.246 09:19:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:16.246 09:19:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.246 09:19:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:16.246 09:19:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.246 09:19:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:16.246 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.246 09:19:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:16.246 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.246 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:16.246 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:16.246 [2024-06-11 09:19:48.058298] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:05:16.246 [2024-06-11 09:19:48.058355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891577 ] 00:05:16.506 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.506 [2024-06-11 09:19:48.115143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.506 [2024-06-11 09:19:48.179284] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.506 [2024-06-11 09:19:48.179348] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
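Note: the 'in use. Specify another.' error above is the point of exit_on_failed_rpc_init: the first spdk_tgt (pid 891381) already owns the default RPC socket /var/tmp/spdk.sock, so a second instance started without -r must fail rpc_listen and exit non-zero. A hedged reconstruction of the collision (the real test waits for the first socket to appear before launching the second):

    ./spdk/build/bin/spdk_tgt -m 0x1 &      # claims /var/tmp/spdk.sock
    first=$!
    # same default socket, different core mask: RPC init fails, app stops
    ./spdk/build/bin/spdk_tgt -m 0x2 || echo "second target exited non-zero, as expected"
    kill "$first"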
00:05:16.506 [2024-06-11 09:19:48.179358] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:16.506 [2024-06-11 09:19:48.179365] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:16.506 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:05:16.506 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:16.506 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:05:16.506 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:05:16.506 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:05:16.506 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:16.506 09:19:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:16.506 09:19:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 891381 00:05:16.506 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 891381 ']' 00:05:16.506 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 891381 00:05:16.506 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:05:16.506 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:16.506 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 891381 00:05:16.506 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:16.506 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:16.506 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 891381' 00:05:16.506 killing process with pid 891381 00:05:16.506 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 891381 00:05:16.506 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 891381 00:05:16.765 00:05:16.765 real 0m1.424s 00:05:16.765 user 0m1.725s 00:05:16.765 sys 0m0.379s 00:05:16.765 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:16.765 09:19:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:16.765 ************************************ 00:05:16.765 END TEST exit_on_failed_rpc_init 00:05:16.765 ************************************ 00:05:16.766 09:19:48 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:16.766 00:05:16.766 real 0m13.827s 00:05:16.766 user 0m13.539s 00:05:16.766 sys 0m1.526s 00:05:16.766 09:19:48 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:16.766 09:19:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.766 ************************************ 00:05:16.766 END TEST skip_rpc 00:05:16.766 ************************************ 00:05:17.026 09:19:48 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:17.026 09:19:48 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:17.026 09:19:48 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:17.026 09:19:48 -- 
common/autotest_common.sh@10 -- # set +x 00:05:17.026 ************************************ 00:05:17.026 START TEST rpc_client 00:05:17.026 ************************************ 00:05:17.026 09:19:48 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:17.026 * Looking for test storage... 00:05:17.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:17.026 09:19:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:17.026 OK 00:05:17.026 09:19:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:17.026 00:05:17.026 real 0m0.129s 00:05:17.026 user 0m0.054s 00:05:17.026 sys 0m0.083s 00:05:17.026 09:19:48 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:17.026 09:19:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:17.026 ************************************ 00:05:17.026 END TEST rpc_client 00:05:17.026 ************************************ 00:05:17.026 09:19:48 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:17.026 09:19:48 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:17.026 09:19:48 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:17.026 09:19:48 -- common/autotest_common.sh@10 -- # set +x 00:05:17.026 ************************************ 00:05:17.026 START TEST json_config 00:05:17.026 ************************************ 00:05:17.026 09:19:48 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:17.288 09:19:48 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.288 09:19:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:17.288 09:19:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.288 09:19:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.288 09:19:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.288 09:19:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.288 09:19:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.288 09:19:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.288 09:19:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.288 09:19:48 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.288 09:19:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.288 09:19:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.288 09:19:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:17.288 09:19:48 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:17.288 09:19:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.288 09:19:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.288 09:19:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:17.288 09:19:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.288 09:19:48 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:17.288 09:19:48 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.288 09:19:48 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.288 09:19:48 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.288 09:19:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.288 09:19:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.288 09:19:48 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.288 09:19:48 json_config -- paths/export.sh@5 -- # export PATH 00:05:17.288 09:19:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.288 09:19:48 json_config -- nvmf/common.sh@47 -- # : 0 00:05:17.289 09:19:48 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:17.289 09:19:48 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:17.289 09:19:48 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.289 09:19:48 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.289 09:19:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.289 09:19:48 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:17.289 09:19:48 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:17.289 09:19:48 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:17.289 INFO: JSON configuration test init 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:17.289 09:19:48 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:17.289 09:19:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:17.289 09:19:48 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:17.289 09:19:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.289 09:19:48 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:17.289 09:19:48 json_config -- json_config/common.sh@9 -- # local app=target 00:05:17.289 09:19:48 json_config -- json_config/common.sh@10 -- # shift 00:05:17.289 09:19:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:17.289 09:19:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:17.289 09:19:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:17.289 09:19:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.289 09:19:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.289 09:19:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=891836 00:05:17.289 09:19:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:17.289 Waiting for target to run... 
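Note: the launch command above uses -r to put the RPC server on a private socket (/var/tmp/spdk_tgt.sock) and --wait-for-rpc to hold the app in a pre-init state, so the entire configuration can arrive over RPC. The load_config call traced below replays a saved configuration into that held target; in current SPDK the hold is released by the framework_start_init RPC, which is presumably what load_config triggers here. Minimal form of the pattern:

    ./spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # ...configuration RPCs go here...
    ./spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init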
00:05:17.289 09:19:48 json_config -- json_config/common.sh@25 -- # waitforlisten 891836 /var/tmp/spdk_tgt.sock 00:05:17.289 09:19:48 json_config -- common/autotest_common.sh@830 -- # '[' -z 891836 ']' 00:05:17.289 09:19:48 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:17.289 09:19:48 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:17.289 09:19:48 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:17.289 09:19:48 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:17.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:17.289 09:19:48 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:17.289 09:19:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.289 [2024-06-11 09:19:49.013162] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:05:17.289 [2024-06-11 09:19:49.013221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891836 ] 00:05:17.289 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.549 [2024-06-11 09:19:49.283296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.549 [2024-06-11 09:19:49.335566] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.120 09:19:49 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:18.120 09:19:49 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:18.120 09:19:49 json_config -- json_config/common.sh@26 -- # echo '' 00:05:18.120 00:05:18.120 09:19:49 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:18.120 09:19:49 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:18.120 09:19:49 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:18.120 09:19:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.120 09:19:49 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:18.120 09:19:49 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:18.120 09:19:49 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:18.120 09:19:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.120 09:19:49 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:18.120 09:19:49 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:18.120 09:19:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:18.690 09:19:50 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:18.690 09:19:50 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:18.690 09:19:50 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:18.690 09:19:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.690 09:19:50 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:05:18.690 09:19:50 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:18.690 09:19:50 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:18.690 09:19:50 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:18.690 09:19:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:18.690 09:19:50 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:18.950 09:19:50 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:18.950 09:19:50 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:18.950 09:19:50 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:18.950 09:19:50 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:18.950 09:19:50 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:18.950 09:19:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.950 09:19:50 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:18.950 09:19:50 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:18.950 09:19:50 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:18.950 09:19:50 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:18.950 09:19:50 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:18.950 09:19:50 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:18.950 09:19:50 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:18.950 09:19:50 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:18.950 09:19:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.950 09:19:50 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:18.950 09:19:50 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:18.950 09:19:50 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:18.950 09:19:50 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:18.950 09:19:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:19.210 MallocForNvmf0 00:05:19.210 09:19:50 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:19.210 09:19:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:19.471 MallocForNvmf1 00:05:19.471 09:19:51 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:19.471 09:19:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:19.732 [2024-06-11 09:19:51.342224] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:19.732 09:19:51 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:19.732 09:19:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:19.732 09:19:51 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:19.732 09:19:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:19.992 09:19:51 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:19.992 09:19:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:20.252 09:19:51 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:20.252 09:19:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:20.252 [2024-06-11 09:19:51.984323] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:20.252 09:19:51 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:20.252 09:19:51 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:20.252 09:19:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.252 09:19:52 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:20.253 09:19:52 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:20.253 09:19:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.512 09:19:52 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:20.512 09:19:52 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:20.512 09:19:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:20.512 MallocBdevForConfigChangeCheck 00:05:20.512 09:19:52 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:20.512 09:19:52 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:20.512 09:19:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.512 09:19:52 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:20.512 09:19:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.082 09:19:52 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:21.082 INFO: shutting down applications... 
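Note: condensed from the tgt_rpc traces above, the whole NVMe-oF target configuration that json_config saves and later replays is a short RPC sequence against the target socket (every command below appears verbatim in the trace; only the save_config redirect is paraphrased):

    RPC="./spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512  --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $RPC save_config > spdk_tgt_config.json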
00:05:21.082 09:19:52 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:21.082 09:19:52 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:21.082 09:19:52 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:21.082 09:19:52 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:21.342 Calling clear_iscsi_subsystem 00:05:21.342 Calling clear_nvmf_subsystem 00:05:21.342 Calling clear_nbd_subsystem 00:05:21.342 Calling clear_ublk_subsystem 00:05:21.342 Calling clear_vhost_blk_subsystem 00:05:21.342 Calling clear_vhost_scsi_subsystem 00:05:21.342 Calling clear_bdev_subsystem 00:05:21.342 09:19:53 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:21.342 09:19:53 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:21.342 09:19:53 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:21.342 09:19:53 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:21.342 09:19:53 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.342 09:19:53 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:21.603 09:19:53 json_config -- json_config/json_config.sh@345 -- # break 00:05:21.604 09:19:53 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:21.604 09:19:53 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:21.604 09:19:53 json_config -- json_config/common.sh@31 -- # local app=target 00:05:21.604 09:19:53 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:21.604 09:19:53 json_config -- json_config/common.sh@35 -- # [[ -n 891836 ]] 00:05:21.604 09:19:53 json_config -- json_config/common.sh@38 -- # kill -SIGINT 891836 00:05:21.604 09:19:53 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:21.604 09:19:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.604 09:19:53 json_config -- json_config/common.sh@41 -- # kill -0 891836 00:05:21.604 09:19:53 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.175 09:19:53 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:22.175 09:19:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.175 09:19:53 json_config -- json_config/common.sh@41 -- # kill -0 891836 00:05:22.175 09:19:53 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:22.175 09:19:53 json_config -- json_config/common.sh@43 -- # break 00:05:22.175 09:19:53 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:22.175 09:19:53 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:22.175 SPDK target shutdown done 00:05:22.175 09:19:53 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:22.175 INFO: relaunching applications... 
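Note: the shutdown just traced is a reusable pattern: SIGINT asks the target to exit cleanly, and kill -0 (signal 0, an existence probe) is polled every half second for at most 30 iterations until the pid disappears. Rewritten as a standalone sketch of the same loop:

    kill -SIGINT "$pid"
    for i in $(seq 1 30); do
        kill -0 "$pid" 2>/dev/null || { echo "SPDK target shutdown done"; break; }
        sleep 0.5
    done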
00:05:22.175 09:19:53 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.175 09:19:53 json_config -- json_config/common.sh@9 -- # local app=target 00:05:22.175 09:19:53 json_config -- json_config/common.sh@10 -- # shift 00:05:22.175 09:19:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:22.175 09:19:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:22.175 09:19:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:22.175 09:19:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:22.175 09:19:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:22.175 09:19:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=892972 00:05:22.175 09:19:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:22.175 Waiting for target to run... 00:05:22.175 09:19:53 json_config -- json_config/common.sh@25 -- # waitforlisten 892972 /var/tmp/spdk_tgt.sock 00:05:22.175 09:19:53 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.175 09:19:53 json_config -- common/autotest_common.sh@830 -- # '[' -z 892972 ']' 00:05:22.175 09:19:53 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:22.175 09:19:53 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:22.175 09:19:53 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:22.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:22.175 09:19:53 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:22.175 09:19:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.175 [2024-06-11 09:19:53.901338] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:05:22.175 [2024-06-11 09:19:53.901395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid892972 ] 00:05:22.175 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.436 [2024-06-11 09:19:54.192365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.436 [2024-06-11 09:19:54.246181] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.007 [2024-06-11 09:19:54.743901] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:23.007 [2024-06-11 09:19:54.776255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:23.007 09:19:54 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:23.007 09:19:54 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:23.007 09:19:54 json_config -- json_config/common.sh@26 -- # echo '' 00:05:23.007 00:05:23.007 09:19:54 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:23.007 09:19:54 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:23.007 INFO: Checking if target configuration is the same... 
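Note: the configuration check below never diffs raw JSON byte-for-byte. json_diff.sh runs both the live save_config output and the on-disk spdk_tgt_config.json through config_filter.py -method sort first, so key and array ordering cannot produce a false mismatch; only then does diff -u decide. The shape of the check, assuming the filter reads stdin as json_diff.sh uses it:

    rpc=./spdk/scripts/rpc.py; filter=./spdk/test/json_config/config_filter.py
    $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
    $filter -method sort < spdk_tgt_config.json > /tmp/disk.json
    diff -u /tmp/live.json /tmp/disk.json && echo "INFO: JSON config files are the same"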
00:05:23.007 09:19:54 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.007 09:19:54 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:23.007 09:19:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:23.007 + '[' 2 -ne 2 ']' 00:05:23.007 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:23.007 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:23.007 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:23.007 +++ basename /dev/fd/62 00:05:23.281 ++ mktemp /tmp/62.XXX 00:05:23.281 + tmp_file_1=/tmp/62.Syu 00:05:23.281 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.281 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:23.281 + tmp_file_2=/tmp/spdk_tgt_config.json.o3l 00:05:23.281 + ret=0 00:05:23.281 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:23.281 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:23.599 + diff -u /tmp/62.Syu /tmp/spdk_tgt_config.json.o3l 00:05:23.599 + echo 'INFO: JSON config files are the same' 00:05:23.599 INFO: JSON config files are the same 00:05:23.599 + rm /tmp/62.Syu /tmp/spdk_tgt_config.json.o3l 00:05:23.599 + exit 0 00:05:23.599 09:19:55 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:23.599 09:19:55 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:23.599 INFO: changing configuration and checking if this can be detected... 00:05:23.599 09:19:55 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:23.599 09:19:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:23.599 09:19:55 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:23.599 09:19:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:23.599 09:19:55 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.599 + '[' 2 -ne 2 ']' 00:05:23.599 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:23.599 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:23.599 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:23.599 +++ basename /dev/fd/62 00:05:23.599 ++ mktemp /tmp/62.XXX 00:05:23.599 + tmp_file_1=/tmp/62.obh 00:05:23.599 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.599 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:23.599 + tmp_file_2=/tmp/spdk_tgt_config.json.13D 00:05:23.599 + ret=0 00:05:23.599 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:23.859 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:24.120 + diff -u /tmp/62.obh /tmp/spdk_tgt_config.json.13D 00:05:24.120 + ret=1 00:05:24.120 + echo '=== Start of file: /tmp/62.obh ===' 00:05:24.120 + cat /tmp/62.obh 00:05:24.120 + echo '=== End of file: /tmp/62.obh ===' 00:05:24.120 + echo '' 00:05:24.120 + echo '=== Start of file: /tmp/spdk_tgt_config.json.13D ===' 00:05:24.120 + cat /tmp/spdk_tgt_config.json.13D 00:05:24.120 + echo '=== End of file: /tmp/spdk_tgt_config.json.13D ===' 00:05:24.120 + echo '' 00:05:24.120 + rm /tmp/62.obh /tmp/spdk_tgt_config.json.13D 00:05:24.120 + exit 1 00:05:24.120 09:19:55 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:24.120 INFO: configuration change detected. 00:05:24.120 09:19:55 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:24.120 09:19:55 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:24.120 09:19:55 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:24.120 09:19:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.120 09:19:55 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:24.120 09:19:55 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:24.120 09:19:55 json_config -- json_config/json_config.sh@317 -- # [[ -n 892972 ]] 00:05:24.120 09:19:55 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:24.120 09:19:55 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:24.120 09:19:55 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:24.120 09:19:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.120 09:19:55 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:24.120 09:19:55 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:24.120 09:19:55 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:24.120 09:19:55 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:24.120 09:19:55 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:24.120 09:19:55 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:24.120 09:19:55 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:24.120 09:19:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.120 09:19:55 json_config -- json_config/json_config.sh@323 -- # killprocess 892972 00:05:24.120 09:19:55 json_config -- common/autotest_common.sh@949 -- # '[' -z 892972 ']' 00:05:24.120 09:19:55 json_config -- common/autotest_common.sh@953 -- # kill -0 892972 00:05:24.120 09:19:55 json_config -- common/autotest_common.sh@954 -- # uname 00:05:24.120 09:19:55 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:24.120 09:19:55 
json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 892972 00:05:24.120 09:19:55 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:24.120 09:19:55 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:24.120 09:19:55 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 892972' 00:05:24.120 killing process with pid 892972 00:05:24.120 09:19:55 json_config -- common/autotest_common.sh@968 -- # kill 892972 00:05:24.120 09:19:55 json_config -- common/autotest_common.sh@973 -- # wait 892972 00:05:24.380 09:19:56 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.380 09:19:56 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:24.380 09:19:56 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:24.380 09:19:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.380 09:19:56 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:24.380 09:19:56 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:24.380 INFO: Success 00:05:24.380 00:05:24.380 real 0m7.302s 00:05:24.380 user 0m9.117s 00:05:24.380 sys 0m1.699s 00:05:24.380 09:19:56 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:24.380 09:19:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.380 ************************************ 00:05:24.381 END TEST json_config 00:05:24.381 ************************************ 00:05:24.381 09:19:56 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:24.381 09:19:56 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:24.381 09:19:56 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:24.381 09:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:24.643 ************************************ 00:05:24.643 START TEST json_config_extra_key 00:05:24.643 ************************************ 00:05:24.643 09:19:56 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:24.643 09:19:56 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.643 09:19:56 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:24.643 09:19:56 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.643 09:19:56 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.643 09:19:56 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.643 09:19:56 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.643 09:19:56 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.643 09:19:56 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.643 09:19:56 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:24.643 09:19:56 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.643 09:19:56 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:24.643 09:19:56 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:24.643 09:19:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:24.643 09:19:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:24.643 09:19:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:24.643 09:19:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:24.643 09:19:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:24.643 09:19:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:24.643 09:19:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:24.643 09:19:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:24.643 09:19:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:24.643 09:19:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:24.643 09:19:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:24.643 INFO: launching applications... 00:05:24.643 09:19:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:24.643 09:19:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:24.643 09:19:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:24.643 09:19:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:24.643 09:19:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:24.643 09:19:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:24.643 09:19:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.643 09:19:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.643 09:19:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=893606 00:05:24.643 09:19:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:24.643 Waiting for target to run... 
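Note: unlike the earlier --wait-for-rpc start, this target boots directly from a file via --json, so it is fully configured the moment initialization completes and no setup RPCs are needed. The file name extra_key.json suggests the config deliberately carries keys the loader must tolerate, which is presumably what this test exercises. Minimal form of the launch (paths as traced):

    ./spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./spdk/test/json_config/extra_key.json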
00:05:24.643 09:19:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 893606 /var/tmp/spdk_tgt.sock 00:05:24.643 09:19:56 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 893606 ']' 00:05:24.643 09:19:56 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:24.643 09:19:56 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:24.643 09:19:56 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:24.644 09:19:56 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:24.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:24.644 09:19:56 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:24.644 09:19:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:24.644 [2024-06-11 09:19:56.366813] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:05:24.644 [2024-06-11 09:19:56.366883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid893606 ] 00:05:24.644 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.215 [2024-06-11 09:19:56.732042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.215 [2024-06-11 09:19:56.785219] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.475 09:19:57 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:25.475 09:19:57 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:05:25.475 09:19:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:25.475 00:05:25.475 09:19:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:25.475 INFO: shutting down applications... 
00:05:25.475 09:19:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:25.475 09:19:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:25.475 09:19:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:25.475 09:19:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 893606 ]] 00:05:25.475 09:19:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 893606 00:05:25.475 09:19:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:25.475 09:19:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.475 09:19:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 893606 00:05:25.475 09:19:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:26.046 09:19:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:26.046 09:19:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.046 09:19:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 893606 00:05:26.046 09:19:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:26.046 09:19:57 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:26.046 09:19:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:26.046 09:19:57 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:26.046 SPDK target shutdown done 00:05:26.046 09:19:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:26.046 Success 00:05:26.047 00:05:26.047 real 0m1.540s 00:05:26.047 user 0m1.190s 00:05:26.047 sys 0m0.464s 00:05:26.047 09:19:57 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:26.047 09:19:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:26.047 ************************************ 00:05:26.047 END TEST json_config_extra_key 00:05:26.047 ************************************ 00:05:26.047 09:19:57 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:26.047 09:19:57 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:26.047 09:19:57 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:26.047 09:19:57 -- common/autotest_common.sh@10 -- # set +x 00:05:26.047 ************************************ 00:05:26.047 START TEST alias_rpc 00:05:26.047 ************************************ 00:05:26.047 09:19:57 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:26.308 * Looking for test storage... 
00:05:26.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:05:26.308 09:19:57 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:05:26.308 09:19:57 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=893950
00:05:26.308 09:19:57 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 893950
00:05:26.308 09:19:57 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:26.308 09:19:57 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 893950 ']'
00:05:26.308 09:19:57 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:26.308 09:19:57 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100
00:05:26.308 09:19:57 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:26.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:26.308 09:19:57 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable
00:05:26.308 09:19:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:26.308 [2024-06-11 09:19:57.993980] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:05:26.308 [2024-06-11 09:19:57.994048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid893950 ]
00:05:26.308 EAL: No free 2048 kB hugepages reported on node 1
00:05:26.308 [2024-06-11 09:19:58.076413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:26.569 [2024-06-11 09:19:58.148190] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:27.140 09:19:58 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:05:27.140 09:19:58 alias_rpc -- common/autotest_common.sh@863 -- # return 0
00:05:27.140 09:19:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:05:27.400 09:19:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 893950
00:05:27.400 09:19:59 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 893950 ']'
00:05:27.400 09:19:59 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 893950
00:05:27.400 09:19:59 alias_rpc -- common/autotest_common.sh@954 -- # uname
00:05:27.400 09:19:59 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:05:27.400 09:19:59 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 893950
00:05:27.400 09:19:59 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:05:27.400 09:19:59 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:05:27.400 09:19:59 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 893950'
00:05:27.400 killing process with pid 893950
00:05:27.400 09:19:59 alias_rpc -- common/autotest_common.sh@968 -- # kill 893950
00:05:27.400 09:19:59 alias_rpc -- common/autotest_common.sh@973 -- # wait 893950
00:05:27.661 
00:05:27.661 real 0m1.506s
00:05:27.661 user 0m1.741s
00:05:27.661 sys 0m0.398s
00:05:27.661 09:19:59 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:27.661 09:19:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:27.661 ************************************
00:05:27.661 END TEST alias_rpc
00:05:27.661 ************************************
00:05:27.661 09:19:59 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]]
00:05:27.661 09:19:59 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:05:27.661 09:19:59 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:27.661 09:19:59 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:27.661 09:19:59 -- common/autotest_common.sh@10 -- # set +x
00:05:27.661 ************************************
00:05:27.661 START TEST spdkcli_tcp
00:05:27.661 ************************************
00:05:27.661 09:19:59 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:05:27.922 * Looking for test storage...
00:05:27.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:05:27.922 09:19:59 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:05:27.922 09:19:59 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:05:27.922 09:19:59 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:05:27.922 09:19:59 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:05:27.922 09:19:59 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:05:27.922 09:19:59 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:05:27.922 09:19:59 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:05:27.922 09:19:59 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable
00:05:27.922 09:19:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:27.922 09:19:59 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=894270
00:05:27.922 09:19:59 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 894270
00:05:27.922 09:19:59 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:05:27.922 09:19:59 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 894270 ']'
00:05:27.922 09:19:59 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:27.922 09:19:59 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100
00:05:27.922 09:19:59 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:27.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:27.922 09:19:59 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable
00:05:27.922 09:19:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:27.922 [2024-06-11 09:19:59.568225] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
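An aside on what the alias_rpc run above exercised, while the spdkcli_tcp target is still starting up below: the whole test is scripts/rpc.py talking to spdk_tgt over the default UNIX socket, replaying a JSON configuration whose method names are deprecated aliases. A minimal sketch of the same flow, with paths shortened to the spdk checkout; conf.json is a hypothetical saved config, and reading -i as load_config's include-aliases switch is an assumption drawn from the test's purpose:

    # Start a bare target; it serves JSON-RPC on /var/tmp/spdk.sock by default
    ./build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    sleep 1    # stand-in for the harness's waitforlisten polling
    # Replay a saved config from stdin; -i lets old alias method names resolve
    ./scripts/rpc.py load_config -i < conf.json
    kill "$spdk_tgt_pid"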
00:05:27.922 [2024-06-11 09:19:59.568297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894270 ]
00:05:27.922 EAL: No free 2048 kB hugepages reported on node 1
00:05:27.922 [2024-06-11 09:19:59.647171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:27.922 [2024-06-11 09:19:59.719742] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:05:27.922 [2024-06-11 09:19:59.719748] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:28.866 09:20:00 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:05:28.866 09:20:00 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0
00:05:28.866 09:20:00 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=894535
00:05:28.866 09:20:00 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:05:28.866 09:20:00 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:05:28.866 [
  "bdev_malloc_delete",
  "bdev_malloc_create",
  "bdev_null_resize",
  "bdev_null_delete",
  "bdev_null_create",
  "bdev_nvme_cuse_unregister",
  "bdev_nvme_cuse_register",
  "bdev_opal_new_user",
  "bdev_opal_set_lock_state",
  "bdev_opal_delete",
  "bdev_opal_get_info",
  "bdev_opal_create",
  "bdev_nvme_opal_revert",
  "bdev_nvme_opal_init",
  "bdev_nvme_send_cmd",
  "bdev_nvme_get_path_iostat",
  "bdev_nvme_get_mdns_discovery_info",
  "bdev_nvme_stop_mdns_discovery",
  "bdev_nvme_start_mdns_discovery",
  "bdev_nvme_set_multipath_policy",
  "bdev_nvme_set_preferred_path",
  "bdev_nvme_get_io_paths",
  "bdev_nvme_remove_error_injection",
  "bdev_nvme_add_error_injection",
  "bdev_nvme_get_discovery_info",
  "bdev_nvme_stop_discovery",
  "bdev_nvme_start_discovery",
  "bdev_nvme_get_controller_health_info",
  "bdev_nvme_disable_controller",
  "bdev_nvme_enable_controller",
  "bdev_nvme_reset_controller",
  "bdev_nvme_get_transport_statistics",
  "bdev_nvme_apply_firmware",
  "bdev_nvme_detach_controller",
  "bdev_nvme_get_controllers",
  "bdev_nvme_attach_controller",
  "bdev_nvme_set_hotplug",
  "bdev_nvme_set_options",
  "bdev_passthru_delete",
  "bdev_passthru_create",
  "bdev_lvol_set_parent_bdev",
  "bdev_lvol_set_parent",
  "bdev_lvol_check_shallow_copy",
  "bdev_lvol_start_shallow_copy",
  "bdev_lvol_grow_lvstore",
  "bdev_lvol_get_lvols",
  "bdev_lvol_get_lvstores",
  "bdev_lvol_delete",
  "bdev_lvol_set_read_only",
  "bdev_lvol_resize",
  "bdev_lvol_decouple_parent",
  "bdev_lvol_inflate",
  "bdev_lvol_rename",
  "bdev_lvol_clone_bdev",
  "bdev_lvol_clone",
  "bdev_lvol_snapshot",
  "bdev_lvol_create",
  "bdev_lvol_delete_lvstore",
  "bdev_lvol_rename_lvstore",
  "bdev_lvol_create_lvstore",
  "bdev_raid_set_options",
  "bdev_raid_remove_base_bdev",
  "bdev_raid_add_base_bdev",
  "bdev_raid_delete",
  "bdev_raid_create",
  "bdev_raid_get_bdevs",
  "bdev_error_inject_error",
  "bdev_error_delete",
  "bdev_error_create",
  "bdev_split_delete",
  "bdev_split_create",
  "bdev_delay_delete",
  "bdev_delay_create",
  "bdev_delay_update_latency",
  "bdev_zone_block_delete",
  "bdev_zone_block_create",
  "blobfs_create",
  "blobfs_detect",
  "blobfs_set_cache_size",
  "bdev_aio_delete",
  "bdev_aio_rescan",
  "bdev_aio_create",
  "bdev_ftl_set_property",
  "bdev_ftl_get_properties",
  "bdev_ftl_get_stats",
  "bdev_ftl_unmap",
  "bdev_ftl_unload",
  "bdev_ftl_delete",
  "bdev_ftl_load",
  "bdev_ftl_create",
  "bdev_virtio_attach_controller",
  "bdev_virtio_scsi_get_devices",
  "bdev_virtio_detach_controller",
  "bdev_virtio_blk_set_hotplug",
  "bdev_iscsi_delete",
  "bdev_iscsi_create",
  "bdev_iscsi_set_options",
  "accel_error_inject_error",
  "ioat_scan_accel_module",
  "dsa_scan_accel_module",
  "iaa_scan_accel_module",
  "vfu_virtio_create_scsi_endpoint",
  "vfu_virtio_scsi_remove_target",
  "vfu_virtio_scsi_add_target",
  "vfu_virtio_create_blk_endpoint",
  "vfu_virtio_delete_endpoint",
  "keyring_file_remove_key",
  "keyring_file_add_key",
  "keyring_linux_set_options",
  "iscsi_get_histogram",
  "iscsi_enable_histogram",
  "iscsi_set_options",
  "iscsi_get_auth_groups",
  "iscsi_auth_group_remove_secret",
  "iscsi_auth_group_add_secret",
  "iscsi_delete_auth_group",
  "iscsi_create_auth_group",
  "iscsi_set_discovery_auth",
  "iscsi_get_options",
  "iscsi_target_node_request_logout",
  "iscsi_target_node_set_redirect",
  "iscsi_target_node_set_auth",
  "iscsi_target_node_add_lun",
  "iscsi_get_stats",
  "iscsi_get_connections",
  "iscsi_portal_group_set_auth",
  "iscsi_start_portal_group",
  "iscsi_delete_portal_group",
  "iscsi_create_portal_group",
  "iscsi_get_portal_groups",
  "iscsi_delete_target_node",
  "iscsi_target_node_remove_pg_ig_maps",
  "iscsi_target_node_add_pg_ig_maps",
  "iscsi_create_target_node",
  "iscsi_get_target_nodes",
  "iscsi_delete_initiator_group",
  "iscsi_initiator_group_remove_initiators",
  "iscsi_initiator_group_add_initiators",
  "iscsi_create_initiator_group",
  "iscsi_get_initiator_groups",
  "nvmf_set_crdt",
  "nvmf_set_config",
  "nvmf_set_max_subsystems",
  "nvmf_stop_mdns_prr",
  "nvmf_publish_mdns_prr",
  "nvmf_subsystem_get_listeners",
  "nvmf_subsystem_get_qpairs",
  "nvmf_subsystem_get_controllers",
  "nvmf_get_stats",
  "nvmf_get_transports",
  "nvmf_create_transport",
  "nvmf_get_targets",
  "nvmf_delete_target",
  "nvmf_create_target",
  "nvmf_subsystem_allow_any_host",
  "nvmf_subsystem_remove_host",
  "nvmf_subsystem_add_host",
  "nvmf_ns_remove_host",
  "nvmf_ns_add_host",
  "nvmf_subsystem_remove_ns",
  "nvmf_subsystem_add_ns",
  "nvmf_subsystem_listener_set_ana_state",
  "nvmf_discovery_get_referrals",
  "nvmf_discovery_remove_referral",
  "nvmf_discovery_add_referral",
  "nvmf_subsystem_remove_listener",
  "nvmf_subsystem_add_listener",
  "nvmf_delete_subsystem",
  "nvmf_create_subsystem",
  "nvmf_get_subsystems",
  "env_dpdk_get_mem_stats",
  "nbd_get_disks",
  "nbd_stop_disk",
  "nbd_start_disk",
  "ublk_recover_disk",
  "ublk_get_disks",
  "ublk_stop_disk",
  "ublk_start_disk",
  "ublk_destroy_target",
  "ublk_create_target",
  "virtio_blk_create_transport",
  "virtio_blk_get_transports",
  "vhost_controller_set_coalescing",
  "vhost_get_controllers",
  "vhost_delete_controller",
  "vhost_create_blk_controller",
  "vhost_scsi_controller_remove_target",
  "vhost_scsi_controller_add_target",
  "vhost_start_scsi_controller",
  "vhost_create_scsi_controller",
  "thread_set_cpumask",
  "framework_get_scheduler",
  "framework_set_scheduler",
  "framework_get_reactors",
  "thread_get_io_channels",
  "thread_get_pollers",
  "thread_get_stats",
  "framework_monitor_context_switch",
  "spdk_kill_instance",
  "log_enable_timestamps",
  "log_get_flags",
  "log_clear_flag",
  "log_set_flag",
  "log_get_level",
  "log_set_level",
  "log_get_print_level",
  "log_set_print_level",
  "framework_enable_cpumask_locks",
  "framework_disable_cpumask_locks",
  "framework_wait_init",
  "framework_start_init",
  "scsi_get_devices",
  "bdev_get_histogram",
  "bdev_enable_histogram",
  "bdev_set_qos_limit",
  "bdev_set_qd_sampling_period",
  "bdev_get_bdevs",
  "bdev_reset_iostat",
  "bdev_get_iostat",
  "bdev_examine",
  "bdev_wait_for_examine",
  "bdev_set_options",
  "notify_get_notifications",
  "notify_get_types",
  "accel_get_stats",
  "accel_set_options",
  "accel_set_driver",
  "accel_crypto_key_destroy",
  "accel_crypto_keys_get",
  "accel_crypto_key_create",
  "accel_assign_opc",
  "accel_get_module_info",
  "accel_get_opc_assignments",
  "vmd_rescan",
  "vmd_remove_device",
  "vmd_enable",
  "sock_get_default_impl",
  "sock_set_default_impl",
  "sock_impl_set_options",
  "sock_impl_get_options",
  "iobuf_get_stats",
  "iobuf_set_options",
  "keyring_get_keys",
  "framework_get_pci_devices",
  "framework_get_config",
  "framework_get_subsystems",
  "vfu_tgt_set_base_path",
  "trace_get_info",
  "trace_get_tpoint_group_mask",
  "trace_disable_tpoint_group",
  "trace_enable_tpoint_group",
  "trace_clear_tpoint_mask",
  "trace_set_tpoint_mask",
  "spdk_get_version",
  "rpc_get_methods"
00:05:28.867 ]
00:05:28.867 09:20:00 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:05:28.867 09:20:00 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable
00:05:28.867 09:20:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:29.128 09:20:00 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:05:29.128 09:20:00 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 894270
00:05:29.128 09:20:00 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 894270 ']'
00:05:29.128 09:20:00 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 894270
00:05:29.128 09:20:00 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname
00:05:29.128 09:20:00 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:05:29.128 09:20:00 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 894270
00:05:29.128 09:20:00 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:05:29.128 09:20:00 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:05:29.128 09:20:00 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 894270'
00:05:29.128 killing process with pid 894270
00:05:29.128 09:20:00 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 894270
00:05:29.128 09:20:00 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 894270
00:05:29.389 
00:05:29.389 real 0m1.548s
00:05:29.389 user 0m2.978s
00:05:29.389 sys 0m0.437s
00:05:29.389 09:20:00 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:29.389 09:20:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:29.389 ************************************
00:05:29.389 END TEST spdkcli_tcp
00:05:29.389 ************************************
00:05:29.389 09:20:00 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:29.389 09:20:00 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:29.389 09:20:00 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:29.389 09:20:00 -- common/autotest_common.sh@10 -- # set +x
00:05:29.389 ************************************
00:05:29.389 START TEST dpdk_mem_utility
00:05:29.389 ************************************
00:05:29.389 09:20:01 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:29.389 * Looking for test storage...
00:05:29.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility
00:05:29.389 09:20:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:05:29.389 09:20:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=894647
00:05:29.389 09:20:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 894647
00:05:29.389 09:20:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:29.389 09:20:01 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 894647 ']'
00:05:29.389 09:20:01 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:29.389 09:20:01 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100
00:05:29.389 09:20:01 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:29.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:29.389 09:20:01 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable
00:05:29.389 09:20:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:29.389 [2024-06-11 09:20:01.183630] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:05:29.389 [2024-06-11 09:20:01.183684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894647 ]
00:05:29.650 EAL: No free 2048 kB hugepages reported on node 1
00:05:29.650 [2024-06-11 09:20:01.258727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:29.650 [2024-06-11 09:20:01.323538] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.221 09:20:02 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:05:30.221 09:20:02 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0
00:05:30.221 09:20:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:05:30.221 09:20:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:05:30.221 09:20:02 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:30.221 09:20:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:30.221 {
00:05:30.221 "filename": "/tmp/spdk_mem_dump.txt"
00:05:30.221 }
00:05:30.221 09:20:02 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:30.221 09:20:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:05:30.490 DPDK memory size 814.000000 MiB in 1 heap(s)
00:05:30.490 1 heaps totaling size 814.000000 MiB
00:05:30.490 size: 814.000000 MiB heap id: 0
00:05:30.490 end heaps----------
00:05:30.490 8 mempools totaling size 598.116089 MiB
00:05:30.490 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:05:30.490 size: 158.602051 MiB name: PDU_data_out_Pool
00:05:30.490 size: 84.521057 MiB name: bdev_io_894647
00:05:30.490 size: 51.011292 MiB name: evtpool_894647
00:05:30.490 size: 50.003479 MiB name: msgpool_894647
00:05:30.490 size: 21.763794 MiB name: PDU_Pool
00:05:30.490 size: 19.513306 MiB name: SCSI_TASK_Pool
00:05:30.490 size: 0.026123 MiB name: Session_Pool
00:05:30.490 end mempools-------
00:05:30.490 6 memzones totaling size 4.142822 MiB
00:05:30.490 size: 1.000366 MiB name: RG_ring_0_894647
00:05:30.490 size: 1.000366 MiB name: RG_ring_1_894647
00:05:30.490 size: 1.000366 MiB name: RG_ring_4_894647
00:05:30.490 size: 1.000366 MiB name: RG_ring_5_894647
00:05:30.490 size: 0.125366 MiB name: RG_ring_2_894647
00:05:30.490 size: 0.015991 MiB name: RG_ring_3_894647
00:05:30.490 end memzones-------
00:05:30.490 09:20:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:05:30.490 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15
00:05:30.490 list of free elements. size: 12.519348 MiB
element at address: 0x200000400000 with size: 1.999512 MiB
element at address: 0x200018e00000 with size: 0.999878 MiB
element at address: 0x200019000000 with size: 0.999878 MiB
element at address: 0x200003e00000 with size: 0.996277 MiB
element at address: 0x200031c00000 with size: 0.994446 MiB
element at address: 0x200013800000 with size: 0.978699 MiB
element at address: 0x200007000000 with size: 0.959839 MiB
element at address: 0x200019200000 with size: 0.936584 MiB
element at address: 0x200000200000 with size: 0.841614 MiB
element at address: 0x20001aa00000 with size: 0.582886 MiB
element at address: 0x20000b200000 with size: 0.490723 MiB
element at address: 0x200000800000 with size: 0.487793 MiB
element at address: 0x200019400000 with size: 0.485657 MiB
element at address: 0x200027e00000 with size: 0.410034 MiB
element at address: 0x200003a00000 with size: 0.355530 MiB
list of standard malloc elements. size: 199.218079 MiB
element at address: 0x20000b3fff80 with size: 132.000122 MiB
element at address: 0x2000071fff80 with size: 64.000122 MiB
element at address: 0x200018efff80 with size: 1.000122 MiB
element at address: 0x2000190fff80 with size: 1.000122 MiB
element at address: 0x2000192fff80 with size: 1.000122 MiB
element at address: 0x2000003d9f00 with size: 0.140747 MiB
element at address: 0x2000192eff00 with size: 0.062622 MiB
element at address: 0x2000003fdf80 with size: 0.007935 MiB
element at address: 0x2000192efdc0 with size: 0.000305 MiB
element at address: 0x2000002d7740 with size: 0.000183 MiB
element at address: 0x2000002d7800 with size: 0.000183 MiB
element at address: 0x2000002d78c0 with size: 0.000183 MiB
element at address: 0x2000002d7ac0 with size: 0.000183 MiB
element at address: 0x2000002d7b80 with size: 0.000183 MiB
element at address: 0x2000002d7c40 with size: 0.000183 MiB
element at address: 0x2000003d9e40 with size: 0.000183 MiB
element at address: 0x20000087ce00 with size: 0.000183 MiB
element at address: 0x20000087cec0 with size: 0.000183 MiB
element at address: 0x2000008fd180 with size: 0.000183 MiB
element at address: 0x200003a5b040 with size: 0.000183 MiB
element at address: 0x200003adb300 with size: 0.000183 MiB
element at address: 0x200003adb500 with size: 0.000183 MiB
element at address: 0x200003adf7c0 with size: 0.000183 MiB
element at address: 0x200003affa80 with size: 0.000183 MiB
element at address: 0x200003affb40 with size: 0.000183 MiB
element at address: 0x200003eff0c0 with size: 0.000183 MiB
element at address: 0x2000070fdd80 with size: 0.000183 MiB
element at address: 0x20000b27da00 with size: 0.000183 MiB
element at address: 0x20000b27dac0 with size: 0.000183 MiB
element at address: 0x20000b2fdd80 with size: 0.000183 MiB
element at address: 0x2000138fa8c0 with size: 0.000183 MiB
element at address: 0x2000192efc40 with size: 0.000183 MiB
element at address: 0x2000192efd00 with size: 0.000183 MiB
element at address: 0x2000194bc740 with size: 0.000183 MiB
element at address: 0x20001aa95380 with size: 0.000183 MiB
element at address: 0x20001aa95440 with size: 0.000183 MiB
element at address: 0x200027e68f80 with size: 0.000183 MiB
element at address: 0x200027e69040 with size: 0.000183 MiB
element at address: 0x200027e6fc40 with size: 0.000183 MiB
element at address: 0x200027e6fe40 with size: 0.000183 MiB
element at address: 0x200027e6ff00 with size: 0.000183 MiB
list of memzone associated elements. size: 602.262573 MiB
element at address: 0x20001aa95500 with size: 211.416748 MiB
associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
element at address: 0x200027e6ffc0 with size: 157.562561 MiB
associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
element at address: 0x2000139fab80 with size: 84.020630 MiB
associated memzone info: size: 84.020508 MiB name: MP_bdev_io_894647_0
element at address: 0x2000009ff380 with size: 48.003052 MiB
associated memzone info: size: 48.002930 MiB name: MP_evtpool_894647_0
element at address: 0x200003fff380 with size: 48.003052 MiB
associated memzone info: size: 48.002930 MiB name: MP_msgpool_894647_0
element at address: 0x2000195be940 with size: 20.255554 MiB
associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
element at address: 0x200031dfeb40 with size: 18.005066 MiB
associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
element at address: 0x2000005ffe00 with size: 2.000488 MiB
associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_894647
element at address: 0x200003bffe00 with size: 2.000488 MiB
associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_894647
element at address: 0x2000002d7d00 with size: 1.008118 MiB
associated memzone info: size: 1.007996 MiB name: MP_evtpool_894647
element at address: 0x20000b2fde40 with size: 1.008118 MiB
associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
element at address: 0x2000194bc800 with size: 1.008118 MiB
associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
element at address: 0x2000070fde40 with size: 1.008118 MiB
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
element at address: 0x2000008fd240 with size: 1.008118 MiB
associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
element at address: 0x200003eff180 with size: 1.000488 MiB
associated memzone info: size: 1.000366 MiB name: RG_ring_0_894647
element at address: 0x200003affc00 with size: 1.000488 MiB
associated memzone info: size: 1.000366 MiB name: RG_ring_1_894647
element at address: 0x2000138fa980 with size: 1.000488 MiB
associated memzone info: size: 1.000366 MiB name: RG_ring_4_894647
element at address: 0x200031cfe940 with size: 1.000488 MiB
associated memzone info: size: 1.000366 MiB name: RG_ring_5_894647
element at address: 0x200003a5b100 with size: 0.500488 MiB
associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_894647
element at address: 0x20000b27db80 with size: 0.500488 MiB
associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
element at address: 0x20000087cf80 with size: 0.500488 MiB
associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
element at address: 0x20001947c540 with size: 0.250488 MiB
associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
element at address: 0x200003adf880 with size: 0.125488 MiB
associated memzone info: size: 0.125366 MiB name: RG_ring_2_894647
element at address: 0x2000070f5b80 with size: 0.031738 MiB
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
element at address: 0x200027e69100 with size: 0.023743 MiB
associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
element at address: 0x200003adb5c0 with size: 0.016113 MiB
associated memzone info: size: 0.015991 MiB name: RG_ring_3_894647
element at address: 0x200027e6f240 with size: 0.002441 MiB
associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
element at address: 0x2000002d7980 with size: 0.000305 MiB
associated memzone info: size: 0.000183 MiB name: MP_msgpool_894647
element at address: 0x200003adb3c0 with size: 0.000305 MiB
associated memzone info: size: 0.000183 MiB name: MP_bdev_io_894647
element at address: 0x200027e6fd00 with size: 0.000305 MiB
associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:30.491 09:20:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:30.491 09:20:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 894647
00:05:30.491 09:20:02 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 894647 ']'
00:05:30.491 09:20:02 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 894647
00:05:30.491 09:20:02 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname
00:05:30.491 09:20:02 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:05:30.491 09:20:02 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 894647
00:05:30.491 09:20:02 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:05:30.491 09:20:02 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:05:30.491 09:20:02 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 894647'
00:05:30.491 killing process with pid 894647
00:05:30.491 09:20:02 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 894647
00:05:30.491 09:20:02 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 894647
00:05:30.759 
00:05:30.759 real 0m1.386s
00:05:30.759 user 0m1.553s
00:05:30.759 sys 0m0.378s
00:05:30.759 09:20:02 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:30.759 09:20:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:30.759 ************************************
00:05:30.759 END TEST dpdk_mem_utility
00:05:30.759 ************************************
00:05:30.759 09:20:02 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:30.759 09:20:02 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:30.759 09:20:02 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:30.759 09:20:02 -- common/autotest_common.sh@10 -- # set +x
00:05:30.759 ************************************
00:05:30.759 START TEST event
00:05:30.759 ************************************
00:05:30.759 09:20:02 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:30.759 * Looking for test storage...
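The dpdk_mem_utility test that just ended reduces to two calls against a running spdk_tgt: an RPC that makes the target write its DPDK allocator state to a dump file, and a script that renders that dump into the heap, mempool and memzone listings above. A sketch of the same sequence, paths shortened to the spdk checkout:

    # Target writes its state out; the RPC's reply names the dump file
    # (/tmp/spdk_mem_dump.txt in the run above)
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # Summarize every heap, mempool and memzone from the dump
    ./scripts/dpdk_mem_info.py
    # -m narrows the report to one heap; heap 0 produced the element lists above
    ./scripts/dpdk_mem_info.py -m 0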
00:05:31.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:31.020 09:20:02 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:05:31.020 09:20:02 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:31.020 09:20:02 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:31.020 09:20:02 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']'
00:05:31.020 09:20:02 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:31.020 09:20:02 event -- common/autotest_common.sh@10 -- # set +x
00:05:31.020 ************************************
00:05:31.020 START TEST event_perf
00:05:31.020 ************************************
00:05:31.020 09:20:02 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:31.020 Running I/O for 1 seconds...[2024-06-11 09:20:02.643493] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:05:31.020 [2024-06-11 09:20:02.643595] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894999 ]
00:05:31.020 EAL: No free 2048 kB hugepages reported on node 1
00:05:31.020 [2024-06-11 09:20:02.727164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:31.020 [2024-06-11 09:20:02.811509] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:05:31.020 [2024-06-11 09:20:02.811783] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:05:31.020 [2024-06-11 09:20:02.811907] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:05:31.020 [2024-06-11 09:20:02.811907] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:32.404 Running I/O for 1 seconds...
00:05:32.404 lcore 0: 177258
00:05:32.404 lcore 1: 177261
00:05:32.404 lcore 2: 177259
00:05:32.404 lcore 3: 177262
00:05:32.404 done.
00:05:32.404 
00:05:32.404 real 0m1.244s
00:05:32.404 user 0m4.144s
00:05:32.404 sys 0m0.095s
00:05:32.404 09:20:03 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:32.404 09:20:03 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:32.404 ************************************
00:05:32.404 END TEST event_perf
00:05:32.404 ************************************
00:05:32.404 09:20:03 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:32.404 09:20:03 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:05:32.404 09:20:03 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:32.404 09:20:03 event -- common/autotest_common.sh@10 -- # set +x
00:05:32.404 ************************************
00:05:32.404 START TEST event_reactor
00:05:32.404 ************************************
00:05:32.404 09:20:03 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:32.404 [2024-06-11 09:20:03.963226] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
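event_perf, which just passed above, is a pure throughput probe: it floods every reactor in the core mask with events for a fixed time and prints one counter per lcore — here roughly 177k events on each of four cores in one second, the near-equal counts being the implicit check that event distribution across reactors is balanced. The invocation, exactly as the harness ran it (relative path):

    # -m 0xF = lcores 0-3, -t 1 = run for one second
    ./test/event/event_perf/event_perf -m 0xF -t 1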
00:05:32.404 [2024-06-11 09:20:03.963456] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid895356 ]
00:05:32.404 EAL: No free 2048 kB hugepages reported on node 1
00:05:32.404 [2024-06-11 09:20:04.052048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:32.404 [2024-06-11 09:20:04.120953] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:33.788 test_start
00:05:33.788 oneshot
00:05:33.788 tick 100
00:05:33.788 tick 100
00:05:33.788 tick 250
00:05:33.788 tick 100
00:05:33.788 tick 100
00:05:33.788 tick 250
00:05:33.788 tick 100
00:05:33.788 tick 500
00:05:33.788 tick 100
00:05:33.788 tick 100
00:05:33.788 tick 250
00:05:33.788 tick 100
00:05:33.788 tick 100
00:05:33.788 test_end
00:05:33.788 
00:05:33.788 real 0m1.230s
00:05:33.788 user 0m1.136s
00:05:33.788 sys 0m0.089s
00:05:33.788 09:20:05 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:33.788 09:20:05 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:33.788 ************************************
00:05:33.788 END TEST event_reactor
00:05:33.788 ************************************
00:05:33.788 09:20:05 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:33.788 09:20:05 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:05:33.788 09:20:05 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:33.788 09:20:05 event -- common/autotest_common.sh@10 -- # set +x
00:05:33.788 ************************************
00:05:33.788 START TEST event_reactor_perf
00:05:33.788 ************************************
00:05:33.788 09:20:05 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:33.788 [2024-06-11 09:20:05.271247] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
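The event_reactor trace above (test_start, oneshot, tick ... test_end) is the reactor's timer machinery talking: a one-shot event plus repeating timers, where the 100/250/500 tick values look like the timers' periods echoed as each one fires — that reading of the numbers is an inference from the output, not documented behaviour. The run itself is just:

    # One reactor, one second of scheduled events
    ./test/event/reactor/reactor -t 1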
00:05:33.788 [2024-06-11 09:20:05.271354] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid895706 ]
00:05:33.788 EAL: No free 2048 kB hugepages reported on node 1
00:05:33.788 [2024-06-11 09:20:05.349813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:33.788 [2024-06-11 09:20:05.419721] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:34.729 test_start
00:05:34.730 test_end
00:05:34.730 Performance: 370942 events per second
00:05:34.730 
00:05:34.730 real 0m1.221s
00:05:34.730 user 0m1.136s
00:05:34.730 sys 0m0.081s
00:05:34.730 09:20:06 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:34.730 09:20:06 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:34.730 ************************************
00:05:34.730 END TEST event_reactor_perf
00:05:34.730 ************************************
00:05:34.730 09:20:06 event -- event/event.sh@49 -- # uname -s
00:05:34.730 09:20:06 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:34.730 09:20:06 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:34.730 09:20:06 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:34.730 09:20:06 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:34.730 09:20:06 event -- common/autotest_common.sh@10 -- # set +x
00:05:34.990 ************************************
00:05:34.990 START TEST event_scheduler
00:05:34.990 ************************************
00:05:34.990 09:20:06 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:34.990 * Looking for test storage...
00:05:34.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:05:34.990 09:20:06 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:34.990 09:20:06 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=896051
00:05:34.990 09:20:06 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:34.990 09:20:06 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:34.990 09:20:06 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 896051
00:05:34.990 09:20:06 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 896051 ']'
00:05:34.990 09:20:06 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:34.990 09:20:06 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100
00:05:34.990 09:20:06 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:34.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
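Note how the scheduler app above is launched half-initialized: --wait-for-rpc parks it right after EAL setup so the test can choose a scheduler before the framework comes up (that happens in the next stretch of the log). Flags copied from the trace; -f is specific to this test binary and its meaning isn't visible in the log, so it is passed through uninterpreted here:

    # 4 cores; -p 0x2 puts the main lcore on core 2 (the EAL line below
    # shows --main-lcore=2); init is gated until an RPC releases it
    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!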
00:05:34.990 09:20:06 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable
00:05:34.990 09:20:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:34.990 [2024-06-11 09:20:06.710471] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:05:34.990 [2024-06-11 09:20:06.710538] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid896051 ]
00:05:34.990 EAL: No free 2048 kB hugepages reported on node 1
00:05:34.990 [2024-06-11 09:20:06.769535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:35.251 [2024-06-11 09:20:06.828925] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:35.251 [2024-06-11 09:20:06.829041] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:05:35.251 [2024-06-11 09:20:06.829198] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:05:35.251 [2024-06-11 09:20:06.829199] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:05:35.251 09:20:06 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:05:35.251 09:20:06 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0
00:05:35.251 09:20:06 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:35.251 09:20:06 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:35.251 09:20:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:35.251 POWER: Env isn't set yet!
00:05:35.251 POWER: Attempting to initialise ACPI cpufreq power management...
00:05:35.251 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:35.251 POWER: Cannot set governor of lcore 0 to userspace
00:05:35.251 POWER: Attempting to initialise PSTAT power management...
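With the app parked, the test flips the framework onto the dynamic scheduler and only then releases initialization — the POWER lines above and below are that scheduler claiming the cpufreq governors during init. The two RPCs, as issued by the harness's rpc_cmd wrapper; shown here via plain rpc.py, which should be equivalent:

    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init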
00:05:35.251 POWER: Power management governor of lcore 0 has been set to 'performance' successfully
00:05:35.251 POWER: Initialized successfully for lcore 0 power management
00:05:35.251 POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:05:35.251 POWER: Initialized successfully for lcore 1 power management
00:05:35.251 POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:05:35.251 POWER: Initialized successfully for lcore 2 power management
00:05:35.251 POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:05:35.251 POWER: Initialized successfully for lcore 3 power management
00:05:35.251 [2024-06-11 09:20:06.959675] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:35.251 [2024-06-11 09:20:06.959687] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:35.251 [2024-06-11 09:20:06.959693] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:35.251 09:20:06 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:35.251 09:20:06 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:35.251 09:20:06 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:35.251 09:20:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:35.251 [2024-06-11 09:20:07.020376] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:35.251 09:20:07 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:35.251 09:20:07 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:35.251 09:20:07 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:35.251 09:20:07 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:35.251 09:20:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:35.251 ************************************
00:05:35.251 START TEST scheduler_create_thread
00:05:35.251 ************************************
00:05:35.251 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread
00:05:35.251 09:20:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:35.251 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:35.251 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:35.513 2
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:35.513 3
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:35.513 4
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:35.513 5
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:35.513 6
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:35.513 7
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:35.513 8
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:35.513 9
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:35.513 09:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:36.899 10
00:05:36.899 09:20:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:36.899 09:20:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:36.899 09:20:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:36.900 09:20:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:37.470 09:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:37.470 09:20:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:37.470 09:20:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:37.470 09:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:37.470 09:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:38.412 09:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:38.412 09:20:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:38.412 09:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:38.412 09:20:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:39.010 09:20:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:39.010 09:20:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:39.010 09:20:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:39.010 09:20:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:39.010 09:20:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:39.595 09:20:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:39.595 
00:05:39.595 real 0m4.213s
00:05:39.595 user 0m0.027s
00:05:39.595 sys 0m0.004s
00:05:39.595 09:20:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:39.595 09:20:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:39.595 ************************************
00:05:39.595 END TEST scheduler_create_thread
00:05:39.595 ************************************
00:05:39.595 09:20:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:39.595 09:20:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 896051
00:05:39.595 09:20:11 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 896051 ']'
00:05:39.595 09:20:11 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 896051
00:05:39.595 09:20:11 event.event_scheduler -- common/autotest_common.sh@954 -- # uname
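All of scheduler_create_thread's work above went through a test-only RPC plugin rather than built-in methods: each scheduler_thread_create registers a thread with a name, an optional pinning cpumask, and what appears to be a target busy percentage (-a 100 spins, -a 0 idles, -a 30 is the one_third_active thread — an inference from the names), and the returned thread id feeds the later set-active and delete calls. The call shapes, verbatim from the trace; rpc_cmd is the harness wrapper, and --plugin loads the test's scheduler_plugin module into rpc.py:

    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50    # thread id 11 -> 50% busy
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12           # remove thread id 12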
00:05:39.595 09:20:11 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:05:39.595 09:20:11 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 896051
00:05:39.595 09:20:11 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2
00:05:39.595 09:20:11 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']'
00:05:39.595 09:20:11 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 896051'
00:05:39.595 killing process with pid 896051
00:05:39.595 09:20:11 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 896051
00:05:39.595 09:20:11 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 896051
00:05:39.856 [2024-06-11 09:20:11.548500] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:40.117 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully
00:05:40.117 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original
00:05:40.117 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:05:40.117 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:05:40.117 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:05:40.117 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:05:40.117 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:05:40.117 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:05:40.117 
00:05:40.117 real 0m5.176s
00:05:40.117 user 0m10.988s
00:05:40.117 sys 0m0.322s
00:05:40.117 09:20:11 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:40.117 09:20:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:40.117 ************************************
00:05:40.117 END TEST event_scheduler
00:05:40.117 ************************************
00:05:40.117 09:20:11 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:40.117 09:20:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:40.117 09:20:11 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:40.117 09:20:11 event -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:40.117 09:20:11 event -- common/autotest_common.sh@10 -- # set +x
00:05:40.117 ************************************
00:05:40.117 START TEST app_repeat
00:05:40.117 ************************************
00:05:40.117 09:20:11 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test
00:05:40.117 09:20:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:40.117 09:20:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:40.117 09:20:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:40.117 09:20:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:40.117 09:20:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:40.117 09:20:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:40.117 09:20:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:40.117 09:20:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=897148
00:05:40.117 09:20:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:40.117 09:20:11 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:40.117 09:20:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 897148'
00:05:40.117 Process app_repeat pid: 897148
00:05:40.117 09:20:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:40.117 09:20:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:40.117 spdk_app_start Round 0
00:05:40.117 09:20:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 897148 /var/tmp/spdk-nbd.sock
00:05:40.117 09:20:11 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 897148 ']'
00:05:40.117 09:20:11 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:40.117 09:20:11 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100
00:05:40.117 09:20:11 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:40.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:40.117 09:20:11 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable
00:05:40.117 09:20:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:40.117 [2024-06-11 09:20:11.856299] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:05:40.117 [2024-06-11 09:20:11.856374] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid897148 ]
00:05:40.378 EAL: No free 2048 kB hugepages reported on node 1
00:05:40.378 [2024-06-11 09:20:11.937468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:40.378 [2024-06-11 09:20:12.009440] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:05:40.378 [2024-06-11 09:20:12.009456] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:40.378 09:20:12 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:05:40.378 09:20:12 event.app_repeat -- common/autotest_common.sh@863 -- # return 0
00:05:40.378 09:20:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:40.639 Malloc0
00:05:40.639 09:20:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:40.899 Malloc1
00:05:40.899 09:20:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:40.899 09:20:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:40.899 09:20:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:40.899 09:20:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:40.899 09:20:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:40.899 09:20:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:40.899 09:20:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:40.899 09:20:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:40.899 09:20:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:40.899 09:20:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:40.899 09:20:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:40.899 09:20:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:40.899 09:20:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:40.899 09:20:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:40.899 09:20:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:40.899 09:20:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:41.160 /dev/nbd0
00:05:41.160 09:20:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:41.160 09:20:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:41.160 09:20:12 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0
00:05:41.160 09:20:12 event.app_repeat -- common/autotest_common.sh@868 -- # local i
00:05:41.160 09:20:12 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:05:41.160 09:20:12 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:05:41.160 09:20:12 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions
00:05:41.160 09:20:12 event.app_repeat -- common/autotest_common.sh@872 -- # break
00:05:41.160 09:20:12 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 ))
00:05:41.160 09:20:12 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 ))
00:05:41.160 09:20:12 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:41.160 1+0 records in
00:05:41.160 1+0 records out
00:05:41.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198926 s, 20.6 MB/s
00:05:41.160 09:20:12 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:41.160 09:20:12 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096
00:05:41.160 09:20:12 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:41.160 09:20:12 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']'
00:05:41.160 09:20:12 event.app_repeat -- common/autotest_common.sh@888 -- # return 0
00:05:41.160 09:20:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:41.160 09:20:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:41.160 09:20:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:41.421 /dev/nbd1
00:05:41.421 09:20:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:41.421 09:20:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:41.421 09:20:13 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1
00:05:41.421 09:20:13 event.app_repeat -- 
common/autotest_common.sh@868 -- # local i 00:05:41.421 09:20:13 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:41.421 09:20:13 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:41.421 09:20:13 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:41.421 09:20:13 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:41.421 09:20:13 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:41.421 09:20:13 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:41.421 09:20:13 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.421 1+0 records in 00:05:41.421 1+0 records out 00:05:41.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244294 s, 16.8 MB/s 00:05:41.421 09:20:13 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.421 09:20:13 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:41.421 09:20:13 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.421 09:20:13 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:41.421 09:20:13 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:41.421 09:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.421 09:20:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.421 09:20:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.421 09:20:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.421 09:20:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.682 { 00:05:41.682 "nbd_device": "/dev/nbd0", 00:05:41.682 "bdev_name": "Malloc0" 00:05:41.682 }, 00:05:41.682 { 00:05:41.682 "nbd_device": "/dev/nbd1", 00:05:41.682 "bdev_name": "Malloc1" 00:05:41.682 } 00:05:41.682 ]' 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.682 { 00:05:41.682 "nbd_device": "/dev/nbd0", 00:05:41.682 "bdev_name": "Malloc0" 00:05:41.682 }, 00:05:41.682 { 00:05:41.682 "nbd_device": "/dev/nbd1", 00:05:41.682 "bdev_name": "Malloc1" 00:05:41.682 } 00:05:41.682 ]' 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.682 /dev/nbd1' 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.682 /dev/nbd1' 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.682 09:20:13 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.682 256+0 records in 00:05:41.682 256+0 records out 00:05:41.682 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00417602 s, 251 MB/s 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.682 256+0 records in 00:05:41.682 256+0 records out 00:05:41.682 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184913 s, 56.7 MB/s 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.682 256+0 records in 00:05:41.682 256+0 records out 00:05:41.682 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165181 s, 63.5 MB/s 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.682 09:20:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.943 09:20:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.943 09:20:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.943 09:20:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.943 09:20:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.943 09:20:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.943 09:20:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.943 09:20:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.943 09:20:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.943 09:20:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.943 09:20:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.204 09:20:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.204 09:20:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.204 09:20:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.204 09:20:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.204 09:20:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.204 09:20:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.204 09:20:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.204 09:20:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.204 09:20:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.204 09:20:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.204 09:20:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.204 09:20:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.204 09:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.204 09:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.464 09:20:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.464 09:20:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.464 09:20:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.464 09:20:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.464 09:20:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.464 09:20:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.464 09:20:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.464 09:20:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.464 09:20:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.464 09:20:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.464 09:20:14 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:05:42.725 [2024-06-11 09:20:14.386824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.725 [2024-06-11 09:20:14.450576] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.725 [2024-06-11 09:20:14.450582] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.725 [2024-06-11 09:20:14.481876] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.725 [2024-06-11 09:20:14.481908] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:46.025 09:20:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:46.025 09:20:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:46.025 spdk_app_start Round 1 00:05:46.025 09:20:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 897148 /var/tmp/spdk-nbd.sock 00:05:46.025 09:20:17 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 897148 ']' 00:05:46.025 09:20:17 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.025 09:20:17 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:46.025 09:20:17 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:46.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:46.025 09:20:17 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:46.025 09:20:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.025 09:20:17 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:46.025 09:20:17 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:46.025 09:20:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.025 Malloc0 00:05:46.025 09:20:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.286 Malloc1 00:05:46.286 09:20:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.286 09:20:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.286 09:20:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.286 09:20:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.286 09:20:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.286 09:20:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.286 09:20:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.286 09:20:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.286 09:20:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.286 09:20:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.286 09:20:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.286 09:20:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:05:46.286 09:20:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.286 09:20:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.286 09:20:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.286 09:20:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.286 /dev/nbd0 00:05:46.286 09:20:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.547 09:20:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.547 09:20:18 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:46.547 09:20:18 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:46.547 09:20:18 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:46.547 09:20:18 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:46.547 09:20:18 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:46.547 09:20:18 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:46.547 09:20:18 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:46.547 09:20:18 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:46.547 09:20:18 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.547 1+0 records in 00:05:46.547 1+0 records out 00:05:46.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271984 s, 15.1 MB/s 00:05:46.547 09:20:18 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.547 09:20:18 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:46.547 09:20:18 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.548 09:20:18 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:46.548 09:20:18 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:46.548 09:20:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.548 09:20:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.548 09:20:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.548 /dev/nbd1 00:05:46.548 09:20:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.548 09:20:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.548 09:20:18 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:46.548 09:20:18 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:46.548 09:20:18 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:46.548 09:20:18 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:46.548 09:20:18 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:46.548 09:20:18 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:46.548 09:20:18 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:46.548 09:20:18 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 
00:05:46.548 09:20:18 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.548 1+0 records in 00:05:46.548 1+0 records out 00:05:46.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282333 s, 14.5 MB/s 00:05:46.548 09:20:18 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.808 09:20:18 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:46.808 09:20:18 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.808 09:20:18 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:46.808 09:20:18 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:46.808 09:20:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.808 09:20:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.808 09:20:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.808 09:20:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.808 09:20:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.808 09:20:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.808 { 00:05:46.808 "nbd_device": "/dev/nbd0", 00:05:46.808 "bdev_name": "Malloc0" 00:05:46.808 }, 00:05:46.808 { 00:05:46.808 "nbd_device": "/dev/nbd1", 00:05:46.808 "bdev_name": "Malloc1" 00:05:46.808 } 00:05:46.808 ]' 00:05:46.808 09:20:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.808 { 00:05:46.808 "nbd_device": "/dev/nbd0", 00:05:46.808 "bdev_name": "Malloc0" 00:05:46.808 }, 00:05:46.808 { 00:05:46.808 "nbd_device": "/dev/nbd1", 00:05:46.808 "bdev_name": "Malloc1" 00:05:46.808 } 00:05:46.808 ]' 00:05:46.808 09:20:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.808 09:20:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.808 /dev/nbd1' 00:05:46.808 09:20:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.808 /dev/nbd1' 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.069 256+0 records in 00:05:47.069 256+0 records out 00:05:47.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124388 s, 84.3 MB/s 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.069 256+0 records in 00:05:47.069 256+0 records out 00:05:47.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156632 s, 66.9 MB/s 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.069 256+0 records in 00:05:47.069 256+0 records out 00:05:47.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167852 s, 62.5 MB/s 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.069 09:20:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.070 09:20:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.070 09:20:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.070 09:20:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.070 09:20:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.070 09:20:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.070 09:20:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.070 09:20:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.070 09:20:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.070 09:20:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.070 09:20:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.070 09:20:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:47.070 09:20:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.070 09:20:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.330 09:20:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.330 09:20:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.330 09:20:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.330 
09:20:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.330 09:20:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.330 09:20:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.330 09:20:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.330 09:20:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.330 09:20:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.330 09:20:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.330 09:20:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.330 09:20:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.330 09:20:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.330 09:20:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.330 09:20:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.330 09:20:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.330 09:20:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.330 09:20:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.330 09:20:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.331 09:20:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.331 09:20:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.591 09:20:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.591 09:20:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.591 09:20:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.591 09:20:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.591 09:20:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.591 09:20:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.591 09:20:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.591 09:20:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.591 09:20:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.591 09:20:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.591 09:20:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.591 09:20:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.591 09:20:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.852 09:20:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:48.112 [2024-06-11 09:20:19.732869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.112 [2024-06-11 09:20:19.796864] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.112 [2024-06-11 09:20:19.796870] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.113 [2024-06-11 09:20:19.828979] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:05:48.113 [2024-06-11 09:20:19.829018] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.411 09:20:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:51.411 09:20:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:51.411 spdk_app_start Round 2 00:05:51.411 09:20:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 897148 /var/tmp/spdk-nbd.sock 00:05:51.411 09:20:22 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 897148 ']' 00:05:51.411 09:20:22 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.411 09:20:22 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:51.411 09:20:22 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:51.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.411 09:20:22 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:51.411 09:20:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.411 09:20:22 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:51.411 09:20:22 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:51.411 09:20:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.411 Malloc0 00:05:51.411 09:20:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.411 Malloc1 00:05:51.411 09:20:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.411 09:20:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.411 09:20:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.411 09:20:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.411 09:20:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.411 09:20:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.411 09:20:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.411 09:20:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.411 09:20:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.411 09:20:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.411 09:20:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.411 09:20:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.411 09:20:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:51.411 09:20:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.411 09:20:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.411 09:20:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.673 /dev/nbd0 00:05:51.673 09:20:23 
event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.673 09:20:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.673 09:20:23 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:51.673 09:20:23 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:51.673 09:20:23 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:51.673 09:20:23 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:51.673 09:20:23 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:51.673 09:20:23 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:51.673 09:20:23 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:51.673 09:20:23 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:51.673 09:20:23 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.673 1+0 records in 00:05:51.673 1+0 records out 00:05:51.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353472 s, 11.6 MB/s 00:05:51.673 09:20:23 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.673 09:20:23 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:51.673 09:20:23 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.673 09:20:23 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:51.673 09:20:23 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:51.673 09:20:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.673 09:20:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.673 09:20:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.934 /dev/nbd1 00:05:51.934 09:20:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.934 09:20:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.934 09:20:23 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:51.934 09:20:23 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:51.934 09:20:23 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:51.934 09:20:23 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:51.934 09:20:23 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:51.934 09:20:23 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:51.934 09:20:23 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:51.934 09:20:23 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:51.934 09:20:23 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.934 1+0 records in 00:05:51.934 1+0 records out 00:05:51.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176666 s, 23.2 MB/s 00:05:51.934 09:20:23 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.934 09:20:23 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:51.934 09:20:23 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.934 09:20:23 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:51.934 09:20:23 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:51.934 09:20:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.934 09:20:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.934 09:20:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.934 09:20:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.934 09:20:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.195 { 00:05:52.195 "nbd_device": "/dev/nbd0", 00:05:52.195 "bdev_name": "Malloc0" 00:05:52.195 }, 00:05:52.195 { 00:05:52.195 "nbd_device": "/dev/nbd1", 00:05:52.195 "bdev_name": "Malloc1" 00:05:52.195 } 00:05:52.195 ]' 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.195 { 00:05:52.195 "nbd_device": "/dev/nbd0", 00:05:52.195 "bdev_name": "Malloc0" 00:05:52.195 }, 00:05:52.195 { 00:05:52.195 "nbd_device": "/dev/nbd1", 00:05:52.195 "bdev_name": "Malloc1" 00:05:52.195 } 00:05:52.195 ]' 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:52.195 /dev/nbd1' 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:52.195 /dev/nbd1' 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:52.195 256+0 records in 00:05:52.195 256+0 records out 00:05:52.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124978 s, 83.9 MB/s 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:52.195 256+0 records in 00:05:52.195 256+0 records out 00:05:52.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159054 s, 65.9 MB/s 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:52.195 256+0 records in 00:05:52.195 256+0 records out 00:05:52.195 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167617 s, 62.6 MB/s 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.195 09:20:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.457 09:20:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.457 09:20:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.457 09:20:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.457 09:20:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.457 09:20:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.457 09:20:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.457 09:20:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.457 09:20:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
00:05:52.457 09:20:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.457 09:20:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.718 09:20:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.718 09:20:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.718 09:20:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.718 09:20:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.718 09:20:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.718 09:20:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.718 09:20:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.718 09:20:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.718 09:20:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.718 09:20:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.718 09:20:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.980 09:20:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.980 09:20:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.980 09:20:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.980 09:20:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.980 09:20:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.980 09:20:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.980 09:20:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.980 09:20:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.980 09:20:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.980 09:20:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.980 09:20:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.980 09:20:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.980 09:20:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.242 09:20:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:53.242 [2024-06-11 09:20:25.003515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.504 [2024-06-11 09:20:25.067697] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.504 [2024-06-11 09:20:25.067702] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.504 [2024-06-11 09:20:25.098987] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:53.504 [2024-06-11 09:20:25.099021] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:56.804 09:20:27 event.app_repeat -- event/event.sh@38 -- # waitforlisten 897148 /var/tmp/spdk-nbd.sock 00:05:56.804 09:20:27 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 897148 ']' 00:05:56.804 09:20:27 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.804 09:20:27 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:56.804 09:20:27 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:56.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.804 09:20:27 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:56.804 09:20:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.804 09:20:28 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:56.804 09:20:28 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:56.804 09:20:28 event.app_repeat -- event/event.sh@39 -- # killprocess 897148 00:05:56.804 09:20:28 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 897148 ']' 00:05:56.804 09:20:28 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 897148 00:05:56.804 09:20:28 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:05:56.804 09:20:28 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:56.804 09:20:28 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 897148 00:05:56.804 09:20:28 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:56.804 09:20:28 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:56.804 09:20:28 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 897148' 00:05:56.804 killing process with pid 897148 00:05:56.804 09:20:28 event.app_repeat -- common/autotest_common.sh@968 -- # kill 897148 00:05:56.804 09:20:28 event.app_repeat -- common/autotest_common.sh@973 -- # wait 897148 00:05:56.804 spdk_app_start is called in Round 0. 00:05:56.804 Shutdown signal received, stop current app iteration 00:05:56.804 Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 reinitialization... 00:05:56.804 spdk_app_start is called in Round 1. 00:05:56.804 Shutdown signal received, stop current app iteration 00:05:56.804 Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 reinitialization... 00:05:56.804 spdk_app_start is called in Round 2. 00:05:56.804 Shutdown signal received, stop current app iteration 00:05:56.804 Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 reinitialization... 00:05:56.804 spdk_app_start is called in Round 3. 
00:05:56.804 Shutdown signal received, stop current app iteration 00:05:56.804 09:20:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:56.804 09:20:28 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:56.804 00:05:56.804 real 0m16.431s 00:05:56.804 user 0m36.369s 00:05:56.804 sys 0m2.374s 00:05:56.804 09:20:28 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:56.804 09:20:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.804 ************************************ 00:05:56.804 END TEST app_repeat 00:05:56.804 ************************************ 00:05:56.804 09:20:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:56.804 09:20:28 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:56.804 09:20:28 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:56.804 09:20:28 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:56.804 09:20:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.804 ************************************ 00:05:56.804 START TEST cpu_locks 00:05:56.804 ************************************ 00:05:56.804 09:20:28 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:56.804 * Looking for test storage... 00:05:56.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:56.804 09:20:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:56.805 09:20:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:56.805 09:20:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:56.805 09:20:28 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:56.805 09:20:28 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:56.805 09:20:28 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:56.805 09:20:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.805 ************************************ 00:05:56.805 START TEST default_locks 00:05:56.805 ************************************ 00:05:56.805 09:20:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:05:56.805 09:20:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=900632 00:05:56.805 09:20:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 900632 00:05:56.805 09:20:28 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 900632 ']' 00:05:56.805 09:20:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.805 09:20:28 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.805 09:20:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:56.805 09:20:28 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:56.805 09:20:28 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable
00:05:56.805 09:20:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:56.805 [2024-06-11 09:20:28.521553] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:05:56.805 [2024-06-11 09:20:28.521623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid900632 ]
00:05:56.805 EAL: No free 2048 kB hugepages reported on node 1
00:05:56.805 [2024-06-11 09:20:28.604615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:57.065 [2024-06-11 09:20:28.684416] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:57.679 09:20:29 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:05:57.679 09:20:29 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0
00:05:57.680 09:20:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 900632
00:05:57.680 09:20:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:57.680 09:20:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 900632
00:05:57.957 lslocks: write error
00:05:57.957 09:20:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 900632
00:05:57.957 09:20:29 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 900632 ']'
00:05:57.957 09:20:29 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 900632
00:05:57.957 09:20:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname
00:05:58.219 09:20:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:05:58.219 09:20:29 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 900632
00:05:58.219 09:20:29 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:05:58.219 09:20:29 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:05:58.219 09:20:29 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 900632'
killing process with pid 900632
00:05:58.219 09:20:29 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 900632
00:05:58.219 09:20:29 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 900632
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 900632
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 900632
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # waitforlisten 900632
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 900632 ']'
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:58.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (900632) - No such process
00:05:58.480 ERROR: process (pid: 900632) is no longer running
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:58.480
00:05:58.480 real 0m1.583s
00:05:58.480 user 0m1.738s
00:05:58.480 sys 0m0.526s
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:58.480 09:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:58.480 ************************************
00:05:58.480 END TEST default_locks
00:05:58.480 ************************************
00:05:58.480 09:20:30 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:58.480 09:20:30 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:58.480 09:20:30 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:58.480 09:20:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:58.480 ************************************
00:05:58.480 START TEST default_locks_via_rpc
00:05:58.480 ************************************
00:05:58.480 09:20:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc
00:05:58.480 09:20:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=900973
00:05:58.480 09:20:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 900973
00:05:58.480 09:20:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:58.480 09:20:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 900973 ']'
00:05:58.480 09:20:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:58.480 09:20:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100
00:05:58.480 09:20:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:58.480 09:20:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable
00:05:58.480 09:20:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:58.480 [2024-06-11 09:20:30.170742] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:05:58.480 [2024-06-11 09:20:30.170797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid900973 ]
00:05:58.480 EAL: No free 2048 kB hugepages reported on node 1
00:05:58.480 [2024-06-11 09:20:30.249956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:58.741 [2024-06-11 09:20:30.328081] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:05:59.314 09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:05:59.314 09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0
00:05:59.314 09:20:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:59.314 09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:59.314 09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:59.314 09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:59.314 09:20:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:59.315 09:20:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:59.315 09:20:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:59.315 09:20:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:59.315 09:20:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:59.315 09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable
00:05:59.315 09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:59.315 09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:05:59.315 09:20:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 900973
00:05:59.315 09:20:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 900973
00:05:59.315 09:20:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:59.575 09:20:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 900973
00:05:59.575 09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 900973 ']'
00:05:59.575 09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 900973
00:05:59.575 09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname
00:05:59.575 09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:05:59.575 09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 900973
00:05:59.575 09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:05:59.575 09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:05:59.575 09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 900973'
killing process with pid 900973
09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 900973
09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 900973
00:05:59.836
00:05:59.836 real 0m1.378s
00:05:59.836 user 0m1.537s
00:05:59.836 sys 0m0.427s
00:05:59.836 09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:05:59.836 09:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:59.836 ************************************
00:05:59.836 END TEST default_locks_via_rpc
00:05:59.836 ************************************
00:05:59.836 09:20:31 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:59.836 09:20:31 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:59.836 09:20:31 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:59.836 09:20:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:59.836 ************************************
00:05:59.836 START TEST non_locking_app_on_locked_coremask
00:05:59.836 ************************************
00:05:59.836 09:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask
00:05:59.836 09:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=901208
00:05:59.836 09:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 901208 /var/tmp/spdk.sock
00:05:59.836 09:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 901208 ']'
00:05:59.836 09:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:59.836 09:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100
00:05:59.836 09:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
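The two default_locks variants that just finished share one verification, visible in the xtrace above: locks_exist pipes lslocks -p <pid> into grep -q spdk_cpu_lock to confirm the target holds a lock on a /var/tmp/spdk_cpu_lock_* file, and the via_rpc variant first toggles locking at runtime with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs before re-checking. A standalone sketch of the check (the pid is taken from the log; any running spdk_tgt would do):

  # Does pid 900973 hold one of the per-core lock files?
  if lslocks -p 900973 | grep -q spdk_cpu_lock; then
    echo "core lock held"
  else
    echo "no core lock held"
  fi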
00:05:59.836 09:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable
00:05:59.836 09:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:59.836 09:20:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:59.836 [2024-06-11 09:20:31.614830] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:05:59.836 [2024-06-11 09:20:31.614883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901208 ]
00:05:59.836 EAL: No free 2048 kB hugepages reported on node 1
00:06:00.097 [2024-06-11 09:20:31.690607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:00.097 [2024-06-11 09:20:31.759898] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:06:00.668 09:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:00.668 09:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0
00:06:00.668 09:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=901476
00:06:00.668 09:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 901476 /var/tmp/spdk2.sock
00:06:00.668 09:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 901476 ']'
00:06:00.668 09:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:00.668 09:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:00.668 09:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:00.668 09:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:00.668 09:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:00.668 09:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:00.929 [2024-06-11 09:20:32.525677] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:06:00.929 [2024-06-11 09:20:32.525731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901476 ]
00:06:00.929 EAL: No free 2048 kB hugepages reported on node 1
00:06:00.929 [2024-06-11 09:20:32.613011] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
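The 'CPU core locks deactivated' notice above is the second target acknowledging --disable-cpumask-locks: it runs on the same core 0 without claiming the lock, so it can coexist with the first instance that holds it. The launch pattern the test drives, sketched with a shortened binary path (the log uses the full /var/jenkins/... path):

  # First instance claims core 0's lock; the second opts out of locking and
  # must use a separate RPC socket so the two targets do not collide.
  ./build/bin/spdk_tgt -m 0x1 &
  pid1=$!
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!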
00:06:00.929 [2024-06-11 09:20:32.613038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:00.929 [2024-06-11 09:20:32.742119] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:06:01.870 09:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:01.870 09:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0
00:06:01.870 09:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 901208
00:06:01.870 09:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:01.870 09:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 901208
00:06:02.130 lslocks: write error
00:06:02.130 09:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 901208
00:06:02.130 09:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 901208 ']'
00:06:02.130 09:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 901208
00:06:02.130 09:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname
00:06:02.130 09:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:06:02.130 09:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 901208
00:06:02.130 09:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:06:02.130 09:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:06:02.130 09:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 901208'
killing process with pid 901208
00:06:02.130 09:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 901208
00:06:02.130 09:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 901208
00:06:02.700 09:20:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 901476
00:06:02.700 09:20:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 901476 ']'
00:06:02.700 09:20:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 901476
00:06:02.700 09:20:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname
00:06:02.701 09:20:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:06:02.701 09:20:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 901476
00:06:02.701 09:20:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:06:02.701 09:20:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:06:02.701 09:20:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 901476'
killing process with pid 901476
09:20:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 901476
09:20:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 901476
00:06:02.961
00:06:02.961 real 0m2.982s
00:06:02.961 user 0m3.449s
00:06:02.961 sys 0m0.797s
00:06:02.961 09:20:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:02.961 09:20:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:02.961 ************************************
00:06:02.961 END TEST non_locking_app_on_locked_coremask
00:06:02.961 ************************************
00:06:02.961 09:20:34 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:02.961 09:20:34 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:02.961 09:20:34 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:02.961 09:20:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:02.961 ************************************
00:06:02.961 START TEST locking_app_on_unlocked_coremask
00:06:02.961 ************************************
00:06:02.961 09:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask
00:06:02.961 09:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=901850
00:06:02.961 09:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 901850 /var/tmp/spdk.sock
00:06:02.961 09:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 901850 ']'
00:06:02.961 09:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:02.961 09:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:02.961 09:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:02.961 09:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:02.961 09:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:02.961 09:20:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:02.961 [2024-06-11 09:20:34.662145] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:06:02.961 [2024-06-11 09:20:34.662195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901850 ]
00:06:02.961 EAL: No free 2048 kB hugepages reported on node 1
00:06:02.961 [2024-06-11 09:20:34.738716] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:02.961 [2024-06-11 09:20:34.738744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:03.222 [2024-06-11 09:20:34.805061] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:06:03.795 09:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:03.795 09:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0
00:06:03.795 09:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=902179
00:06:03.795 09:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 902179 /var/tmp/spdk2.sock
00:06:03.795 09:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 902179 ']'
00:06:03.795 09:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:03.795 09:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:03.795 09:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:03.795 09:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:03.795 09:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:03.795 09:20:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:03.795 [2024-06-11 09:20:35.563548] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:06:03.795 [2024-06-11 09:20:35.563599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902179 ]
00:06:03.795 EAL: No free 2048 kB hugepages reported on node 1
00:06:04.056 [2024-06-11 09:20:35.651020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:04.056 [2024-06-11 09:20:35.780681] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.627 09:20:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:04.627 09:20:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0
00:06:04.627 09:20:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 902179
00:06:04.627 09:20:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:04.627 09:20:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 902179
00:06:05.199 lslocks: write error
00:06:05.199 09:20:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 901850
00:06:05.199 09:20:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 901850 ']'
00:06:05.199 09:20:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 901850
00:06:05.199 09:20:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname
00:06:05.199 09:20:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:06:05.199 09:20:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 901850
00:06:05.199 09:20:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:06:05.200 09:20:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:06:05.200 09:20:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 901850'
killing process with pid 901850
09:20:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 901850
09:20:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 901850
00:06:05.460 09:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 902179
00:06:05.460 09:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 902179 ']'
00:06:05.460 09:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 902179
00:06:05.460 09:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname
00:06:05.460 09:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:06:05.460 09:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 902179
00:06:05.460 09:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:06:05.460 09:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:06:05.460 09:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 902179'
killing process with pid 902179
09:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 902179
09:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 902179
00:06:05.721
00:06:05.721 real 0m2.869s
00:06:05.721 user 0m3.295s
00:06:05.721 sys 0m0.788s
00:06:05.721 09:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:05.721 09:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:05.721 ************************************
00:06:05.721 END TEST locking_app_on_unlocked_coremask
00:06:05.721 ************************************
00:06:05.721 09:20:37 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:05.721 09:20:37 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:05.721 09:20:37 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:05.721 09:20:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:05.982 ************************************
00:06:05.982 START TEST locking_app_on_locked_coremask
00:06:05.982 ************************************
00:06:05.982 09:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask
00:06:05.982 09:20:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=902557
00:06:05.982 09:20:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 902557 /var/tmp/spdk.sock
00:06:05.982 09:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 902557 ']'
00:06:05.982 09:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:05.982 09:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:05.982 09:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:05.982 09:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:05.982 09:20:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:05.982 09:20:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:05.982 [2024-06-11 09:20:37.596597] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:06:05.982 [2024-06-11 09:20:37.596646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902557 ]
00:06:05.982 EAL: No free 2048 kB hugepages reported on node 1
00:06:05.982 [2024-06-11 09:20:37.671640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:05.982 [2024-06-11 09:20:37.737496] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:06:06.925 09:20:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:06.925 09:20:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0
00:06:06.925 09:20:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=902725
00:06:06.925 09:20:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 902725 /var/tmp/spdk2.sock
00:06:06.925 09:20:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0
00:06:06.925 09:20:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 902725 /var/tmp/spdk2.sock
00:06:06.925 09:20:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten
00:06:06.925 09:20:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:06.925 09:20:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:06:06.925 09:20:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten
00:06:06.925 09:20:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:06:06.925 09:20:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 902725 /var/tmp/spdk2.sock
00:06:06.925 09:20:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 902725 ']'
00:06:06.925 09:20:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:06.925 09:20:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:06.925 09:20:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:06.925 09:20:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:06.925 09:20:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:06.925 [2024-06-11 09:20:38.500355] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:06:06.925 [2024-06-11 09:20:38.500409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902725 ]
00:06:06.925 EAL: No free 2048 kB hugepages reported on node 1
00:06:06.925 [2024-06-11 09:20:38.586485] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 902557 has claimed it.
00:06:06.925 [2024-06-11 09:20:38.586528] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:07.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (902725) - No such process
00:06:07.496 ERROR: process (pid: 902725) is no longer running
00:06:07.496 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:07.496 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1
00:06:07.496 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1
00:06:07.496 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:06:07.496 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:06:07.496 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:06:07.496 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 902557
00:06:07.496 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 902557
00:06:07.496 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:08.067 lslocks: write error
00:06:08.067 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 902557
00:06:08.067 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 902557 ']'
00:06:08.067 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 902557
00:06:08.067 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname
00:06:08.067 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:06:08.067 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 902557
00:06:08.067 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:06:08.067 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:06:08.067 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 902557'
killing process with pid 902557
09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 902557
09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 902557
00:06:08.329
00:06:08.329 real 0m2.352s
00:06:08.329 user 0m2.704s
00:06:08.329 sys 0m0.618s
00:06:08.329 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:08.329 09:20:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:08.329 ************************************
00:06:08.329 END TEST locking_app_on_locked_coremask
00:06:08.329 ************************************
00:06:08.329 09:20:39 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:08.329 09:20:39 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:08.329 09:20:39 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:08.329 09:20:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:08.329 ************************************
00:06:08.329 START TEST locking_overlapped_coremask
00:06:08.329 ************************************
00:06:08.329 09:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask
00:06:08.329 09:20:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=903008
00:06:08.329 09:20:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 903008 /var/tmp/spdk.sock
00:06:08.329 09:20:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:06:08.329 09:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 903008 ']'
00:06:08.329 09:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:08.329 09:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:08.329 09:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:08.329 09:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:08.329 09:20:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:08.329 [2024-06-11 09:20:40.033132] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
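locking_app_on_locked_coremask, which just ended above, exercises the failure path: the second spdk_tgt aborts with 'Cannot create lock on core 0, probably process 902557 has claimed it' before ever listening, the NOT wrapper around waitforlisten converts that non-zero exit (es=1 in the xtrace) into a test pass, and the 'kill: (902725) - No such process' line merely confirms the process is already gone. A sketch of the negative-test idiom, as a simplified stand-in for autotest_common.sh's actual NOT helper:

  # Succeed only when the wrapped command fails, e.g.:
  #   NOT waitforlisten 902725 /var/tmp/spdk2.sock
  NOT() {
    if "$@"; then
      return 1   # unexpected success
    fi
    return 0     # expected failure
  }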
00:06:08.329 [2024-06-11 09:20:40.033196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903008 ]
00:06:08.329 EAL: No free 2048 kB hugepages reported on node 1
00:06:08.329 [2024-06-11 09:20:40.113163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:08.590 [2024-06-11 09:20:40.181814] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:06:08.590 [2024-06-11 09:20:40.181930] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:06:08.590 [2024-06-11 09:20:40.181933] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.161 09:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:09.161 09:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0
00:06:09.161 09:20:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=903266
00:06:09.161 09:20:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:09.161 09:20:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 903266 /var/tmp/spdk2.sock
00:06:09.161 09:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0
00:06:09.161 09:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 903266 /var/tmp/spdk2.sock
00:06:09.161 09:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten
00:06:09.161 09:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:06:09.161 09:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten
00:06:09.161 09:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:06:09.161 09:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 903266 /var/tmp/spdk2.sock
00:06:09.162 09:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 903266 ']'
00:06:09.162 09:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:09.162 09:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:09.162 09:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:09.162 09:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:09.162 09:20:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:09.162 [2024-06-11 09:20:40.892647] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:06:09.162 [2024-06-11 09:20:40.892699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903266 ]
00:06:09.162 EAL: No free 2048 kB hugepages reported on node 1
00:06:09.162 [2024-06-11 09:20:40.963811] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 903008 has claimed it.
00:06:09.162 [2024-06-11 09:20:40.963843] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:10.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (903266) - No such process
00:06:10.105 ERROR: process (pid: 903266) is no longer running
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 903008
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 903008 ']'
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 903008
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 903008
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 903008'
killing process with pid 903008
09:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 903008
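The core masks explain the error above: -m 0x7 is binary 111 (cores 0-2) and -m 0x1c is binary 11100 (cores 2-4), so the only contested core is core 2, exactly the one named in 'Cannot create lock on core 2'. Worked out:

  0x07 = 0b00111 -> cores 0,1,2   (first spdk_tgt, holds the locks)
  0x1c = 0b11100 -> cores 2,3,4   (second spdk_tgt)
  0x07 & 0x1c = 0b00100 -> core 2 only, so the second instance exits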
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 903008
00:06:10.105
00:06:10.105 real 0m1.856s
00:06:10.105 user 0m5.281s
00:06:10.105 sys 0m0.388s
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:10.105 ************************************
00:06:10.105 END TEST locking_overlapped_coremask
00:06:10.105 ************************************
00:06:10.105 09:20:41 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:10.105 09:20:41 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:06:10.105 09:20:41 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:10.105 09:20:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:10.105 ************************************
00:06:10.105 START TEST locking_overlapped_coremask_via_rpc
00:06:10.105 ************************************
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=903504
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 903504 /var/tmp/spdk.sock
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 903504 ']'
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:10.105 09:20:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:10.366 [2024-06-11 09:20:41.952517] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:06:10.366 [2024-06-11 09:20:41.952570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903504 ]
00:06:10.366 EAL: No free 2048 kB hugepages reported on node 1
00:06:10.366 [2024-06-11 09:20:42.030906] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:10.366 [2024-06-11 09:20:42.030939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:10.366 [2024-06-11 09:20:42.102423] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:06:10.366 [2024-06-11 09:20:42.102541] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:06:10.366 [2024-06-11 09:20:42.102545] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:06:11.308 09:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:11.308 09:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0
00:06:11.308 09:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=903644
00:06:11.308 09:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 903644 /var/tmp/spdk2.sock
00:06:11.308 09:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 903644 ']'
00:06:11.308 09:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:06:11.308 09:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:11.308 09:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100
00:06:11.308 09:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:11.308 09:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable
00:06:11.308 09:20:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:11.309 [2024-06-11 09:20:42.880840] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:06:11.309 [2024-06-11 09:20:42.880892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903644 ]
00:06:11.309 EAL: No free 2048 kB hugepages reported on node 1
00:06:11.309 [2024-06-11 09:20:42.950703] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:11.309 [2024-06-11 09:20:42.950727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:11.309 [2024-06-11 09:20:43.056425] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:06:11.309 [2024-06-11 09:20:43.063383] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:06:11.309 [2024-06-11 09:20:43.063386] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4
00:06:12.251 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:06:12.251 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0
00:06:12.251 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:06:12.251 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:12.251 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:12.251 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:12.251 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:12.251 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0
00:06:12.251 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:12.251 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd
00:06:12.251 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:06:12.251 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd
00:06:12.251 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:06:12.251 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:12.251 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable
00:06:12.251 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:12.252 [2024-06-11 09:20:43.755379] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 903504 has claimed it.
00:06:12.252 request: 00:06:12.252 { 00:06:12.252 "method": "framework_enable_cpumask_locks", 00:06:12.252 "req_id": 1 00:06:12.252 } 00:06:12.252 Got JSON-RPC error response 00:06:12.252 response: 00:06:12.252 { 00:06:12.252 "code": -32603, 00:06:12.252 "message": "Failed to claim CPU core: 2" 00:06:12.252 } 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 903504 /var/tmp/spdk.sock 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 903504 ']' 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 903644 /var/tmp/spdk2.sock 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 903644 ']' 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
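Both halves of the exchange above run over SPDK's JSON-RPC Unix-domain sockets: framework_enable_cpumask_locks succeeds against the first target, while the same method on /var/tmp/spdk2.sock fails because core 2 is already locked, and -32603 is the standard JSON-RPC 2.0 "Internal error" code carrying the claim failure back. Equivalent calls with the stock rpc.py client (a sketch; the harness drives the same RPCs through its rpc_cmd wrapper):

  scripts/rpc.py framework_enable_cpumask_locks                         # first target, default /var/tmp/spdk.sock
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # expected to fail: core 2 already claimed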
00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:12.252 09:20:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.582 09:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:12.582 09:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:12.582 09:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:12.582 09:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:12.582 09:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:12.582 09:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:12.582 00:06:12.582 real 0m2.302s 00:06:12.582 user 0m1.037s 00:06:12.582 sys 0m0.193s 00:06:12.582 09:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:12.582 09:20:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.582 ************************************ 00:06:12.582 END TEST locking_overlapped_coremask_via_rpc 00:06:12.582 ************************************ 00:06:12.582 09:20:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:12.582 09:20:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 903504 ]] 00:06:12.582 09:20:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 903504 00:06:12.582 09:20:44 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 903504 ']' 00:06:12.582 09:20:44 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 903504 00:06:12.582 09:20:44 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:12.582 09:20:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:12.582 09:20:44 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 903504 00:06:12.582 09:20:44 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:12.582 09:20:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:12.583 09:20:44 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 903504' 00:06:12.583 killing process with pid 903504 00:06:12.583 09:20:44 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 903504 00:06:12.583 09:20:44 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 903504 00:06:12.844 09:20:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 903644 ]] 00:06:12.844 09:20:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 903644 00:06:12.844 09:20:44 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 903644 ']' 00:06:12.844 09:20:44 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 903644 00:06:12.844 09:20:44 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:12.844 09:20:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 
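check_remaining_locks, visible at the top of this block, is a plain glob-versus-expected comparison: with the second target running lock-free, exactly the first target's lock files for cores 0-2 should exist under /var/tmp. Restated without the xtrace noise (paths verbatim from the trace):

  locks=(/var/tmp/spdk_cpu_lock_*)                    # whatever lock files exist right now
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0, 1 and 2
  [[ "${locks[*]}" == "${locks_expected[*]}" ]]       # any missing or stale lock fails the test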
00:06:12.844 09:20:44 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 903644 00:06:12.844 09:20:44 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:06:12.844 09:20:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:06:12.844 09:20:44 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 903644' 00:06:12.844 killing process with pid 903644 00:06:12.844 09:20:44 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 903644 00:06:12.844 09:20:44 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 903644 00:06:13.105 09:20:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:13.105 09:20:44 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:13.105 09:20:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 903504 ]] 00:06:13.105 09:20:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 903504 00:06:13.105 09:20:44 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 903504 ']' 00:06:13.105 09:20:44 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 903504 00:06:13.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (903504) - No such process 00:06:13.105 09:20:44 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 903504 is not found' 00:06:13.105 Process with pid 903504 is not found 00:06:13.105 09:20:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 903644 ]] 00:06:13.105 09:20:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 903644 00:06:13.105 09:20:44 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 903644 ']' 00:06:13.105 09:20:44 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 903644 00:06:13.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (903644) - No such process 00:06:13.105 09:20:44 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 903644 is not found' 00:06:13.105 Process with pid 903644 is not found 00:06:13.105 09:20:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:13.105 00:06:13.105 real 0m16.438s 00:06:13.105 user 0m29.840s 00:06:13.105 sys 0m4.597s 00:06:13.105 09:20:44 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:13.105 09:20:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.105 ************************************ 00:06:13.105 END TEST cpu_locks 00:06:13.105 ************************************ 00:06:13.105 00:06:13.105 real 0m42.318s 00:06:13.105 user 1m23.834s 00:06:13.105 sys 0m7.947s 00:06:13.105 09:20:44 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:13.105 09:20:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.105 ************************************ 00:06:13.105 END TEST event 00:06:13.105 ************************************ 00:06:13.105 09:20:44 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:13.105 09:20:44 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:13.105 09:20:44 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:13.105 09:20:44 -- common/autotest_common.sh@10 -- # set +x 00:06:13.105 ************************************ 00:06:13.105 START TEST thread 00:06:13.105 ************************************ 00:06:13.105 09:20:44 thread -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:13.366 * Looking for test storage... 00:06:13.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:13.366 09:20:44 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:13.366 09:20:44 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:13.366 09:20:44 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:13.366 09:20:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.366 ************************************ 00:06:13.366 START TEST thread_poller_perf 00:06:13.366 ************************************ 00:06:13.366 09:20:45 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:13.366 [2024-06-11 09:20:45.039420] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:06:13.366 [2024-06-11 09:20:45.039522] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904148 ] 00:06:13.366 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.366 [2024-06-11 09:20:45.123211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.626 [2024-06-11 09:20:45.194943] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.626 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:14.567 ====================================== 00:06:14.567 busy:2410308556 (cyc) 00:06:14.567 total_run_count: 288000 00:06:14.567 tsc_hz: 2400000000 (cyc) 00:06:14.567 ====================================== 00:06:14.567 poller_cost: 8369 (cyc), 3487 (nsec) 00:06:14.567 00:06:14.567 real 0m1.240s 00:06:14.567 user 0m1.144s 00:06:14.567 sys 0m0.091s 00:06:14.567 09:20:46 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:14.567 09:20:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.568 ************************************ 00:06:14.568 END TEST thread_poller_perf 00:06:14.568 ************************************ 00:06:14.568 09:20:46 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:14.568 09:20:46 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:14.568 09:20:46 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:14.568 09:20:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.568 ************************************ 00:06:14.568 START TEST thread_poller_perf 00:06:14.568 ************************************ 00:06:14.568 09:20:46 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:14.568 [2024-06-11 09:20:46.355288] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
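The ====== result block above is consistent with plain arithmetic over the TSC counters: per-execution poller cost is busy cycles divided by total_run_count, then converted to nanoseconds via tsc_hz. Reproducing the reported figures (numbers copied from the first run above; truncating like awk's %d matches both runs in this log):

  busy=2410308556 runs=288000 hz=2400000000
  awk -v b="$busy" -v r="$runs" -v hz="$hz" \
      'BEGIN { c = b / r; printf "poller_cost: %d (cyc), %d (nsec)\n", c, c * 1e9 / hz }'
  # -> poller_cost: 8369 (cyc), 3487 (nsec)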
00:06:14.568 [2024-06-11 09:20:46.355370] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904435 ] 00:06:14.835 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.835 [2024-06-11 09:20:46.435057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.835 [2024-06-11 09:20:46.499312] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.835 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:15.777 ====================================== 00:06:15.777 busy:2402058856 (cyc) 00:06:15.777 total_run_count: 3811000 00:06:15.777 tsc_hz: 2400000000 (cyc) 00:06:15.777 ====================================== 00:06:15.777 poller_cost: 630 (cyc), 262 (nsec) 00:06:15.777 00:06:15.777 real 0m1.219s 00:06:15.777 user 0m1.141s 00:06:15.777 sys 0m0.073s 00:06:15.777 09:20:47 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.777 09:20:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:15.777 ************************************ 00:06:15.777 END TEST thread_poller_perf 00:06:15.777 ************************************ 00:06:15.777 09:20:47 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:15.777 00:06:15.777 real 0m2.716s 00:06:15.777 user 0m2.381s 00:06:15.777 sys 0m0.342s 00:06:15.777 09:20:47 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.777 09:20:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.777 ************************************ 00:06:15.777 END TEST thread 00:06:15.777 ************************************ 00:06:16.039 09:20:47 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:16.039 09:20:47 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:16.039 09:20:47 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:16.039 09:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:16.039 ************************************ 00:06:16.039 START TEST accel 00:06:16.039 ************************************ 00:06:16.039 09:20:47 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:16.039 * Looking for test storage... 00:06:16.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:16.039 09:20:47 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:16.039 09:20:47 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:16.039 09:20:47 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:16.039 09:20:47 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=904831 00:06:16.039 09:20:47 accel -- accel/accel.sh@63 -- # waitforlisten 904831 00:06:16.039 09:20:47 accel -- common/autotest_common.sh@830 -- # '[' -z 904831 ']' 00:06:16.039 09:20:47 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.039 09:20:47 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:16.039 09:20:47 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:16.039 09:20:47 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
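The -c /dev/fd/63 argument a few lines up is bash process substitution at work: build_accel_config joins the accel_json_cfg array (empty here, since every module toggle evaluates 0 -gt 0 as false) and pipes it through jq, so spdk_tgt reads its config from a file descriptor rather than a file on disk. The shape of the pattern, sketched with an illustrative empty payload rather than the harness's actual config:

  accel_json_cfg=()                          # no accel modules requested in this run
  build/bin/spdk_tgt -c <(echo '{}')         # bash presents the pipe to -c as /dev/fd/63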
00:06:16.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.039 09:20:47 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:16.039 09:20:47 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:16.039 09:20:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.039 09:20:47 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.039 09:20:47 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.039 09:20:47 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.039 09:20:47 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.039 09:20:47 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.039 09:20:47 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:16.039 09:20:47 accel -- accel/accel.sh@41 -- # jq -r . 00:06:16.039 [2024-06-11 09:20:47.830851] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:06:16.039 [2024-06-11 09:20:47.830914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904831 ] 00:06:16.300 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.300 [2024-06-11 09:20:47.908629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.300 [2024-06-11 09:20:47.980407] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.871 09:20:48 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:16.871 09:20:48 accel -- common/autotest_common.sh@863 -- # return 0 00:06:16.871 09:20:48 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:16.871 09:20:48 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:16.871 09:20:48 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:16.871 09:20:48 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:16.871 09:20:48 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:17.132 09:20:48 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:17.132 09:20:48 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:17.132 09:20:48 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:17.132 09:20:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.132 09:20:48 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:17.132 09:20:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.132 09:20:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.133 09:20:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.133 09:20:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.133 09:20:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.133 09:20:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.133 09:20:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.133 09:20:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.133 09:20:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.133 09:20:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.133 09:20:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.133 09:20:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.133 09:20:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.133 09:20:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.133 09:20:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.133 09:20:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.133 09:20:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.133 09:20:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.133 09:20:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.133 09:20:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.133 09:20:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.133 09:20:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.133 
09:20:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.133 09:20:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.133 09:20:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.133 09:20:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.133 09:20:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.133 09:20:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.133 09:20:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.133 09:20:48 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # IFS== 00:06:17.133 09:20:48 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:17.133 09:20:48 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:17.133 09:20:48 accel -- accel/accel.sh@75 -- # killprocess 904831 00:06:17.133 09:20:48 accel -- common/autotest_common.sh@949 -- # '[' -z 904831 ']' 00:06:17.133 09:20:48 accel -- common/autotest_common.sh@953 -- # kill -0 904831 00:06:17.133 09:20:48 accel -- common/autotest_common.sh@954 -- # uname 00:06:17.133 09:20:48 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:17.133 09:20:48 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 904831 00:06:17.133 09:20:48 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:17.133 09:20:48 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:17.133 09:20:48 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 904831' 00:06:17.133 killing process with pid 904831 00:06:17.133 09:20:48 accel -- common/autotest_common.sh@968 -- # kill 904831 00:06:17.133 09:20:48 accel -- common/autotest_common.sh@973 -- # wait 904831 00:06:17.395 09:20:49 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:17.395 09:20:49 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:17.395 09:20:49 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:17.395 09:20:49 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:17.395 09:20:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.395 09:20:49 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:06:17.395 09:20:49 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:17.395 09:20:49 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:17.395 09:20:49 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.395 09:20:49 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.395 09:20:49 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.395 09:20:49 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.395 09:20:49 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.395 09:20:49 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:17.395 09:20:49 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
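The long IFS== read loop above is how the harness snapshots expected opcode-to-module assignments: accel_get_opc_assignments returns one JSON object, jq flattens it to key=value lines, and with no hardware accel configured every opcode maps to the software module. The same pipeline, condensed (scripts/rpc.py is the stock SPDK RPC client; the echo body is illustrative):

  scripts/rpc.py accel_get_opc_assignments |
    jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' |
    while IFS== read -r opc module; do
      echo "expect $opc handled by $module"   # everything is "software" in this run
    done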
00:06:17.395 09:20:49 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:17.395 09:20:49 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:17.395 09:20:49 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:17.395 09:20:49 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:17.395 09:20:49 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:17.395 09:20:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.395 ************************************ 00:06:17.395 START TEST accel_missing_filename 00:06:17.395 ************************************ 00:06:17.395 09:20:49 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:06:17.395 09:20:49 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:06:17.395 09:20:49 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:17.395 09:20:49 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:17.395 09:20:49 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:17.395 09:20:49 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:17.395 09:20:49 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:17.395 09:20:49 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:06:17.395 09:20:49 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:17.395 09:20:49 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:17.395 09:20:49 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.395 09:20:49 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.395 09:20:49 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.395 09:20:49 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.395 09:20:49 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.395 09:20:49 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:17.395 09:20:49 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:17.395 [2024-06-11 09:20:49.178817] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:06:17.395 [2024-06-11 09:20:49.178929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905199 ] 00:06:17.656 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.656 [2024-06-11 09:20:49.261732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.656 [2024-06-11 09:20:49.326929] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.656 [2024-06-11 09:20:49.358582] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.656 [2024-06-11 09:20:49.395362] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:17.656 A filename is required. 
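accel_missing_filename is a pure negative test: the compress workload has no default input, so accel_perf must refuse to start ("A filename is required."), and the NOT wrapper turns that expected failure into a pass -- the es=234 / es=106 / es=1 shuffle in the trace below folds signal-style exit codes above 128 down before asserting non-zero. Stripped of harness noise (path relative to the SPDK tree), the failing call is just:

  build/examples/accel_perf -t 1 -w compress   # refuses: compress needs -l <input file>
  echo $?                                      # non-zero, which is what NOT() asserts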
00:06:17.656 09:20:49 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:06:17.656 09:20:49 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:17.656 09:20:49 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:06:17.656 09:20:49 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:06:17.656 09:20:49 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:06:17.656 09:20:49 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:17.656 00:06:17.656 real 0m0.305s 00:06:17.656 user 0m0.223s 00:06:17.656 sys 0m0.123s 00:06:17.656 09:20:49 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:17.656 09:20:49 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:17.656 ************************************ 00:06:17.656 END TEST accel_missing_filename 00:06:17.656 ************************************ 00:06:17.917 09:20:49 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.917 09:20:49 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:17.917 09:20:49 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:17.917 09:20:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.917 ************************************ 00:06:17.917 START TEST accel_compress_verify 00:06:17.917 ************************************ 00:06:17.917 09:20:49 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.917 09:20:49 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:06:17.917 09:20:49 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.917 09:20:49 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:17.917 09:20:49 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:17.917 09:20:49 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:17.917 09:20:49 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:17.917 09:20:49 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.917 09:20:49 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.917 09:20:49 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:17.917 09:20:49 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.917 09:20:49 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.917 09:20:49 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.917 09:20:49 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.917 09:20:49 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.917 
09:20:49 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:17.917 09:20:49 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:17.917 [2024-06-11 09:20:49.554583] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:06:17.917 [2024-06-11 09:20:49.554684] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905221 ] 00:06:17.917 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.917 [2024-06-11 09:20:49.634697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.917 [2024-06-11 09:20:49.707105] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.178 [2024-06-11 09:20:49.739737] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.178 [2024-06-11 09:20:49.776822] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:18.178 00:06:18.178 Compression does not support the verify option, aborting. 00:06:18.178 09:20:49 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:06:18.178 09:20:49 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:18.178 09:20:49 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:06:18.178 09:20:49 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:06:18.178 09:20:49 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:06:18.178 09:20:49 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:18.178 00:06:18.178 real 0m0.307s 00:06:18.178 user 0m0.234s 00:06:18.178 sys 0m0.113s 00:06:18.178 09:20:49 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:18.178 09:20:49 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:18.178 ************************************ 00:06:18.178 END TEST accel_compress_verify 00:06:18.178 ************************************ 00:06:18.178 09:20:49 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:18.178 09:20:49 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:18.178 09:20:49 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:18.178 09:20:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.178 ************************************ 00:06:18.178 START TEST accel_wrong_workload 00:06:18.178 ************************************ 00:06:18.178 09:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:06:18.178 09:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:06:18.178 09:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:18.178 09:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:18.178 09:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:18.178 09:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:18.178 09:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:18.178 09:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:06:18.178 
09:20:49 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:18.178 09:20:49 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:18.178 09:20:49 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.178 09:20:49 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.178 09:20:49 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.178 09:20:49 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.178 09:20:49 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.178 09:20:49 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:18.178 09:20:49 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:18.178 Unsupported workload type: foobar 00:06:18.178 [2024-06-11 09:20:49.935671] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:18.178 accel_perf options: 00:06:18.178 [-h help message] 00:06:18.178 [-q queue depth per core] 00:06:18.178 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:18.178 [-T number of threads per core 00:06:18.178 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:18.178 [-t time in seconds] 00:06:18.178 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:18.178 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:18.178 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:18.178 [-l for compress/decompress workloads, name of uncompressed input file 00:06:18.178 [-S for crc32c workload, use this seed value (default 0) 00:06:18.178 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:18.178 [-f for fill workload, use this BYTE value (default 255) 00:06:18.178 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:18.178 [-y verify result if this switch is on] 00:06:18.178 [-a tasks to allocate per core (default: same value as -q)] 00:06:18.178 Can be used to spread operations across a wider range of memory. 
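accel_wrong_workload ends exactly as intended: -w foobar is rejected during argument parsing, the usage text above is printed, and only the non-zero exit matters to the harness; the "Error: writing output failed: Broken pipe" lines nearby are most likely the usage dump hitting an already-closed capture pipe rather than a real failure. The two negative invocations in this stretch, minus the wrappers:

  build/examples/accel_perf -t 1 -w foobar        # "Unsupported workload type: foobar" + usage
  build/examples/accel_perf -t 1 -w xor -y -x -1  # "-x option must be non-negative." + usage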
00:06:18.178 09:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:06:18.178 09:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:18.178 09:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:18.178 09:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:18.178 00:06:18.178 real 0m0.037s 00:06:18.178 user 0m0.023s 00:06:18.178 sys 0m0.013s 00:06:18.178 09:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:18.178 09:20:49 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:18.178 ************************************ 00:06:18.178 END TEST accel_wrong_workload 00:06:18.178 ************************************ 00:06:18.178 Error: writing output failed: Broken pipe 00:06:18.178 09:20:49 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:18.178 09:20:49 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:18.178 09:20:49 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:18.178 09:20:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.440 ************************************ 00:06:18.440 START TEST accel_negative_buffers 00:06:18.440 ************************************ 00:06:18.440 09:20:50 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:18.440 09:20:50 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:06:18.440 09:20:50 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:18.440 09:20:50 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:18.440 09:20:50 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:18.440 09:20:50 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:18.440 09:20:50 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:18.440 09:20:50 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:06:18.440 09:20:50 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:18.440 09:20:50 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:18.440 09:20:50 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.440 09:20:50 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.440 09:20:50 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.440 09:20:50 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.440 09:20:50 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.440 09:20:50 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:18.440 09:20:50 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:18.440 -x option must be non-negative. 
00:06:18.440 [2024-06-11 09:20:50.051288] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:18.440 accel_perf options: 00:06:18.440 [-h help message] 00:06:18.440 [-q queue depth per core] 00:06:18.440 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:18.440 [-T number of threads per core 00:06:18.440 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:18.440 [-t time in seconds] 00:06:18.440 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:18.440 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:18.440 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:18.440 [-l for compress/decompress workloads, name of uncompressed input file 00:06:18.440 [-S for crc32c workload, use this seed value (default 0) 00:06:18.440 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:18.440 [-f for fill workload, use this BYTE value (default 255) 00:06:18.440 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:18.440 [-y verify result if this switch is on] 00:06:18.440 [-a tasks to allocate per core (default: same value as -q)] 00:06:18.440 Can be used to spread operations across a wider range of memory. 00:06:18.440 09:20:50 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:06:18.440 09:20:50 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:18.440 09:20:50 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:18.440 09:20:50 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:18.440 00:06:18.440 real 0m0.038s 00:06:18.440 user 0m0.021s 00:06:18.440 sys 0m0.016s 00:06:18.440 09:20:50 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:18.440 09:20:50 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:18.440 ************************************ 00:06:18.440 END TEST accel_negative_buffers 00:06:18.440 ************************************ 00:06:18.440 Error: writing output failed: Broken pipe 00:06:18.440 09:20:50 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:18.440 09:20:50 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:18.440 09:20:50 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:18.440 09:20:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.440 ************************************ 00:06:18.440 START TEST accel_crc32c 00:06:18.440 ************************************ 00:06:18.440 09:20:50 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:18.440 09:20:50 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:18.440 09:20:50 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:18.441 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.441 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.441 09:20:50 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:18.441 09:20:50 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:18.441 09:20:50 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:18.441 09:20:50 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.441 09:20:50 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.441 09:20:50 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.441 09:20:50 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.441 09:20:50 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.441 09:20:50 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:18.441 09:20:50 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:18.441 [2024-06-11 09:20:50.165757] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:06:18.441 [2024-06-11 09:20:50.165835] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905385 ] 00:06:18.441 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.441 [2024-06-11 09:20:50.248671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.702 [2024-06-11 09:20:50.329455] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.702 09:20:50 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.702 09:20:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.645 09:20:51 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:19.645 09:20:51 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.645 00:06:19.645 real 0m1.325s 00:06:19.645 user 0m1.212s 00:06:19.645 sys 0m0.124s 00:06:19.645 09:20:51 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:19.645 09:20:51 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:19.906 ************************************ 00:06:19.906 END TEST accel_crc32c 00:06:19.906 ************************************ 00:06:19.906 09:20:51 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:19.906 09:20:51 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:19.906 09:20:51 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:19.906 09:20:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.906 ************************************ 00:06:19.906 START TEST accel_crc32c_C2 00:06:19.906 ************************************ 00:06:19.906 09:20:51 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:19.906 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.906 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:19.906 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.906 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.906 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:19.906 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:19.906 09:20:51 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.906 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.906 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.907 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.907 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.907 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.907 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:19.907 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:19.907 [2024-06-11 09:20:51.563294] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:06:19.907 [2024-06-11 09:20:51.563395] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905640 ] 00:06:19.907 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.907 [2024-06-11 09:20:51.644327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.907 [2024-06-11 09:20:51.718407] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.167 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.168 09:20:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.109 
09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:21.109 09:20:52 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.109 00:06:21.109 real 0m1.314s 00:06:21.110 user 0m1.210s 00:06:21.110 sys 0m0.114s 00:06:21.110 09:20:52 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:21.110 09:20:52 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:21.110 ************************************ 00:06:21.110 END TEST accel_crc32c_C2 00:06:21.110 ************************************ 00:06:21.110 09:20:52 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:21.110 09:20:52 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:21.110 09:20:52 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:21.110 09:20:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.110 ************************************ 00:06:21.110 START TEST accel_copy 00:06:21.110 ************************************ 00:06:21.110 09:20:52 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:06:21.110 09:20:52 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:21.110 09:20:52 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:21.110 09:20:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.110 09:20:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.110 09:20:52 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:21.371 09:20:52 
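The accel_crc32c_C2 pass above is the same crc32c verify run with -C 2 appended; the only other difference visible in its configure block is the single '4096 bytes' buffer (the copy_crc32c_C2 variant later adds an '8192 bytes' second buffer). A minimal sketch of reproducing this step by hand, assuming the accel_perf binary built in this workspace and assuming the -c /dev/fd/62 JSON config assembled by build_accel_config can be omitted for a default software-module run:

  # 1-second chained crc32c run with verification enabled
  ./spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2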
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:21.371 09:20:52 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:21.371 09:20:52 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.371 09:20:52 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.371 09:20:52 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.371 09:20:52 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.371 09:20:52 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.371 09:20:52 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:21.371 09:20:52 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:21.371 [2024-06-11 09:20:52.950102] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:06:21.371 [2024-06-11 09:20:52.950194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905995 ] 00:06:21.371 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.371 [2024-06-11 09:20:53.028230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.371 [2024-06-11 09:20:53.099499] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.371 09:20:53 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.371 09:20:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:21.372 09:20:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.372 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.372 09:20:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:22.754 09:20:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.754 00:06:22.754 real 0m1.308s 00:06:22.754 user 0m1.201s 00:06:22.754 sys 0m0.118s 00:06:22.754 09:20:54 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:22.754 09:20:54 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:22.754 ************************************ 00:06:22.754 END TEST accel_copy 00:06:22.754 ************************************ 00:06:22.754 09:20:54 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:22.754 09:20:54 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:22.754 09:20:54 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:22.754 09:20:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.754 ************************************ 00:06:22.754 START TEST accel_fill 00:06:22.754 ************************************ 00:06:22.754 09:20:54 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:22.754 09:20:54 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:22.754 09:20:54 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:22.754 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.754 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.754 09:20:54 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:22.754 09:20:54 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:22.754 09:20:54 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:22.754 09:20:54 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.754 09:20:54 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.754 09:20:54 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.754 09:20:54 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.754 09:20:54 accel.accel_fill -- 
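The fill test adds shaping flags to the common template: run_test passes -t 1 -w fill -f 128 -q 64 -a 64 -y, and the configure trace reflects them as val=0x80 (the fill byte, 128) and two val=64 assignments (presumably the -q queue depth and the -a value; the trace does not label which is which). Sketch under the same assumptions as the crc32c example:

  # 1-second fill run: fill byte 128, plus the two 64s exactly as run_test passes them
  ./spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y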
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.754 09:20:54 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:22.754 09:20:54 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:22.754 [2024-06-11 09:20:54.331312] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:06:22.754 [2024-06-11 09:20:54.331387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid906345 ] 00:06:22.755 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.755 [2024-06-11 09:20:54.410617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.755 [2024-06-11 09:20:54.484385] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.755 09:20:54 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:22.755 09:20:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:24.140 09:20:55 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.140 00:06:24.140 real 0m1.311s 00:06:24.140 user 0m1.209s 00:06:24.140 sys 0m0.112s 00:06:24.140 09:20:55 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:24.140 09:20:55 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:24.140 ************************************ 00:06:24.140 END TEST accel_fill 00:06:24.140 ************************************ 00:06:24.140 09:20:55 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:24.140 09:20:55 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:24.140 09:20:55 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:24.140 09:20:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.140 ************************************ 00:06:24.140 START TEST accel_copy_crc32c 00:06:24.140 ************************************ 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
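The long runs of accel.sh@19/@21 records are a single loop: accel.sh splits each line of accel_perf output on ':' with read, then case-matches the key, capturing the reported module (accel.sh@22) and opcode (accel.sh@23) for the checks at the end of each test. A minimal sketch of that shape (the case patterns here are hypothetical, not the ones in accel.sh):

  # consume 'key: value' lines from accel_perf, keeping the module and opcode fields
  while IFS=: read -r var val; do
      case "$var" in
          *[Mm]odule*)   accel_module=${val# } ;;
          *[Ww]orkload*) accel_opc=${val# } ;;
      esac
  done < <(./spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y)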
00:06:24.140 [2024-06-11 09:20:55.717835] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:06:24.140 [2024-06-11 09:20:55.717936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid906700 ] 00:06:24.140 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.140 [2024-06-11 09:20:55.805164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.140 [2024-06-11 09:20:55.878627] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.140 09:20:55 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.140 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.141 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.141 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.141 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.141 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.141 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:24.141 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.141 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.141 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.141 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.141 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.141 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.141 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:24.141 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:24.141 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:24.141 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:24.141 09:20:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.524 00:06:25.524 real 0m1.320s 00:06:25.524 user 0m1.211s 00:06:25.524 sys 0m0.121s 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:25.524 09:20:57 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:25.524 ************************************ 00:06:25.524 END TEST accel_copy_crc32c 00:06:25.524 ************************************ 00:06:25.524 09:20:57 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:25.524 09:20:57 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:25.524 09:20:57 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:25.524 09:20:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.524 ************************************ 00:06:25.524 START TEST accel_copy_crc32c_C2 00:06:25.524 ************************************ 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
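Each section closes with three accel.sh@27 checks before the timing triplet: the parsed module is non-empty, the parsed opcode is non-empty, and the module equals the expected one. The \s\o\f\t\w\a\r\e on the right-hand side is not corruption; it is how xtrace escapes a [[ ]] pattern operand to keep it literal. After expansion the checks amount to:

  [[ -n $accel_module ]]           # a module was reported at all
  [[ -n $accel_opc ]]              # the opcode was reported
  [[ $accel_module == software ]]  # and the run used the software path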
copy_crc32c -y -C 2 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:25.524 [2024-06-11 09:20:57.109833] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:06:25.524 [2024-06-11 09:20:57.109927] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid906927 ] 00:06:25.524 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.524 [2024-06-11 09:20:57.188186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.524 [2024-06-11 09:20:57.256678] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:25.524 09:20:57 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:25.524 09:20:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.907 00:06:26.907 real 0m1.306s 00:06:26.907 user 0m1.200s 00:06:26.907 sys 0m0.118s 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:26.907 09:20:58 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:06:26.907 ************************************ 00:06:26.907 END TEST accel_copy_crc32c_C2 00:06:26.907 ************************************ 00:06:26.907 09:20:58 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:26.907 09:20:58 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:26.907 09:20:58 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:26.907 09:20:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.907 ************************************ 00:06:26.907 START TEST accel_dualcast 00:06:26.907 ************************************ 00:06:26.907 09:20:58 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:26.907 [2024-06-11 09:20:58.489169] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
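run_test itself lives in autotest_common.sh: it prints the START banner, runs the named test body under time (producing the real/user/sys triplet above each END banner), and toggles xtrace around it; the '[' 7 -le 1 ']' records are its argument-count guard. A rough sketch of that shape, not the exact SPDK implementation:

  # hypothetical reduction of run_test: banner, timed body, banner
  run_test() {
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"
      echo "************ END TEST $name ************"
  }
  run_test accel_dualcast accel_test -t 1 -w dualcast -y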
00:06:26.907 [2024-06-11 09:20:58.489255] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907112 ] 00:06:26.907 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.907 [2024-06-11 09:20:58.569564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.907 [2024-06-11 09:20:58.642536] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.907 
09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.907 09:20:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:26.908 09:20:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.293 09:20:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.293 09:20:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.293 09:20:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.293 09:20:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.293 09:20:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.293 09:20:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.293 09:20:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.293 09:20:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.293 09:20:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.293 09:20:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.293 09:20:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.293 09:20:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.293 09:20:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.293 09:20:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.293 09:20:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.293 09:20:59 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:28.294 09:20:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.294 09:20:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.294 09:20:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.294 09:20:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.294 09:20:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:28.294 09:20:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:28.294 09:20:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:28.294 09:20:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:28.294 09:20:59 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.294 09:20:59 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:28.294 09:20:59 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.294 00:06:28.294 real 0m1.311s 00:06:28.294 user 0m1.204s 00:06:28.294 sys 0m0.118s 00:06:28.294 09:20:59 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:28.294 09:20:59 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:28.294 ************************************ 00:06:28.294 END TEST accel_dualcast 00:06:28.294 ************************************ 00:06:28.294 09:20:59 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:28.294 09:20:59 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:28.294 09:20:59 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:28.294 09:20:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.294 ************************************ 00:06:28.294 START TEST accel_compare 00:06:28.294 ************************************ 00:06:28.294 09:20:59 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:06:28.294 09:20:59 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:28.294 09:20:59 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:28.294 09:20:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.294 09:20:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.294 09:20:59 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:28.294 09:20:59 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:28.294 09:20:59 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:28.294 09:20:59 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.294 09:20:59 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.294 09:20:59 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.294 09:20:59 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.294 09:20:59 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.294 09:20:59 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:28.294 09:20:59 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:28.294 [2024-06-11 09:20:59.874428] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
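The block of IFS=: / read -r var val / case "$var" records that dominates each suite above is accel.sh consuming accel_perf's startup banner, which prints one "key: value" line per setting; the captured values are visible in the val= records (core mask 0x1, workload dualcast, a 4096-byte transfer size, the software module, queue and allocate depth 32, one thread per core, a 1-second run time, verify Yes). A simplified stand-in for that loop, reconstructed from the trace rather than copied from SPDK's accel.sh (the banner key names and the $ACCEL_PERF path are guesses):

    # Split each banner line on ':' and remember the opcode and module,
    # then run the same checks the log shows at the end of the suite.
    while IFS=: read -r var val; do
      case "$var" in
        *"Workload Type"*) accel_opc=${val//[[:space:]]/} ;;   # guessed key name
        *"Module"*) accel_module=${val//[[:space:]]/} ;;       # guessed key name
      esac
    done < <("$ACCEL_PERF" -t 1 -w dualcast -y)   # $ACCEL_PERF: placeholder path
    [[ -n $accel_module ]]
    [[ -n $accel_opc ]]
    [[ $accel_module == software ]]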
00:06:28.294 [2024-06-11 09:20:59.874521] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907439 ] 00:06:28.294 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.294 [2024-06-11 09:20:59.953137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.294 [2024-06-11 09:21:00.034526] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.294 09:21:00 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:28.294 09:21:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:29.707 09:21:01 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.707 00:06:29.707 real 0m1.321s 00:06:29.707 user 0m1.214s 00:06:29.707 sys 0m0.118s 00:06:29.707 09:21:01 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:29.707 09:21:01 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:29.707 ************************************ 00:06:29.707 END TEST accel_compare 00:06:29.707 ************************************ 00:06:29.707 09:21:01 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:29.707 09:21:01 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:29.707 09:21:01 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:29.707 09:21:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.707 ************************************ 00:06:29.707 START TEST accel_xor 00:06:29.707 ************************************ 00:06:29.707 09:21:01 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:29.707 [2024-06-11 09:21:01.271456] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
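Any of these cases can be replayed outside the harness. The binary path and flags below are copied verbatim from the accel_perf command line recorded above (-t sets the run time in seconds, -w the workload, -y enables verification, as we read the flags against the banner values); the harness additionally passes -c /dev/fd/62 to feed in a JSON accel config, which we assume can be dropped when the default software module is acceptable. DPDK wants hugepages, so this typically needs root:

    # Replaying the compare case by hand; adjust the workspace path to your tree.
    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
    sudo "$PERF" -t 1 -w compare -y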
00:06:29.707 [2024-06-11 09:21:01.271567] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid907849 ] 00:06:29.707 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.707 [2024-06-11 09:21:01.360519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.707 [2024-06-11 09:21:01.439372] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.707 09:21:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.090 
09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.090 00:06:31.090 real 0m1.328s 00:06:31.090 user 0m1.210s 00:06:31.090 sys 0m0.128s 00:06:31.090 09:21:02 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:31.090 09:21:02 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:31.090 ************************************ 00:06:31.090 END TEST accel_xor 00:06:31.090 ************************************ 00:06:31.090 09:21:02 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:31.090 09:21:02 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:31.090 09:21:02 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:31.090 09:21:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.090 ************************************ 00:06:31.090 START TEST accel_xor 00:06:31.090 ************************************ 00:06:31.090 09:21:02 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:31.090 [2024-06-11 09:21:02.674995] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
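The two accel_xor suites differ only in the trailing -x 3: the first trace records val=2 XOR source buffers (apparently the default) where this second one records val=3. The run_test lines that bracket every suite are autotest_common.sh's wrapper; a simplified, hypothetical stand-in (the real one also times the body and toggles xtrace) behaves like:

    # Hypothetical sketch of the wrapper pattern: the first argument names
    # the suite, the rest is the command executed between the banners.
    run_test() {
      local name=$1 rc=0; shift
      echo "************************************"
      echo "START TEST $name"
      "$@" || rc=$?
      echo "END TEST $name"
      echo "************************************"
      return $rc
    }
    run_test accel_xor accel_test -t 1 -w xor -y -x 3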
00:06:31.090 [2024-06-11 09:21:02.675057] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908244 ] 00:06:31.090 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.090 [2024-06-11 09:21:02.756696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.090 [2024-06-11 09:21:02.834049] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:31.090 09:21:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.474 
09:21:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:32.474 09:21:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.474 00:06:32.474 real 0m1.319s 00:06:32.474 user 0m1.208s 00:06:32.474 sys 0m0.121s 00:06:32.474 09:21:03 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:32.474 09:21:03 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:32.475 ************************************ 00:06:32.475 END TEST accel_xor 00:06:32.475 ************************************ 00:06:32.475 09:21:04 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:32.475 09:21:04 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:32.475 09:21:04 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:32.475 09:21:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.475 ************************************ 00:06:32.475 START TEST accel_dif_verify 00:06:32.475 ************************************ 00:06:32.475 09:21:04 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:32.475 [2024-06-11 09:21:04.070407] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
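dif_verify is the first suite above whose trace carries four buffer sizes instead of one: two 4096-byte values followed by '512 bytes' and '8 bytes', which we read as the data transfer, its mirror buffer, the protected block size, and the per-block T10 DIF field (an 8-byte tuple of 16-bit guard CRC, 16-bit application tag, and 32-bit reference tag). Illustrative arithmetic only, not SPDK code:

    # One 8-byte DIF field per 512-byte block of a 4096-byte transfer.
    xfer=4096 blk=512 dif=8
    nblocks=$((xfer / blk))        # 8 blocks
    pi_total=$((nblocks * dif))    # 64 bytes of protection information
    echo "$nblocks blocks, $pi_total bytes of DIF to verify"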
00:06:32.475 [2024-06-11 09:21:04.070513] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908590 ] 00:06:32.475 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.475 [2024-06-11 09:21:04.157245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.475 [2024-06-11 09:21:04.235920] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 
09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:32.475 09:21:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.860 
09:21:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:33.860 09:21:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.860 00:06:33.860 real 0m1.328s 00:06:33.860 user 0m1.210s 00:06:33.860 sys 0m0.131s 00:06:33.860 09:21:05 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:33.860 09:21:05 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:33.860 ************************************ 00:06:33.860 END TEST accel_dif_verify 00:06:33.860 ************************************ 00:06:33.860 09:21:05 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:33.860 09:21:05 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:33.861 09:21:05 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:33.861 09:21:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.861 ************************************ 00:06:33.861 START TEST accel_dif_generate 00:06:33.861 ************************************ 00:06:33.861 09:21:05 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 
09:21:05 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:33.861 [2024-06-11 09:21:05.468774] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:06:33.861 [2024-06-11 09:21:05.468840] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908790 ] 00:06:33.861 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.861 [2024-06-11 09:21:05.547173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.861 [2024-06-11 09:21:05.620546] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 09:21:05 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:33.861 09:21:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.244 09:21:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.244 09:21:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.244 09:21:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.244 09:21:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.244 09:21:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.244 09:21:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.244 09:21:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.244 09:21:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.244 09:21:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.245 09:21:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.245 09:21:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.245 09:21:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.245 09:21:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.245 09:21:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.245 09:21:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:35.245 09:21:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:35.245 09:21:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.245 09:21:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:35.245 09:21:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.245 00:06:35.245 real 0m1.310s 00:06:35.245 user 0m1.200s 00:06:35.245 sys 
0m0.123s 00:06:35.245 09:21:06 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:35.245 09:21:06 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:35.245 ************************************ 00:06:35.245 END TEST accel_dif_generate 00:06:35.245 ************************************ 00:06:35.245 09:21:06 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:35.245 09:21:06 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:35.245 09:21:06 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:35.245 09:21:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.245 ************************************ 00:06:35.245 START TEST accel_dif_generate_copy 00:06:35.245 ************************************ 00:06:35.245 09:21:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:06:35.245 09:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:35.245 09:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:35.245 09:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:35.245 09:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:35.245 09:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:35.245 09:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.245 09:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.245 09:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.245 09:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.245 09:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.245 09:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:35.245 09:21:06 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:35.245 [2024-06-11 09:21:06.855816] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
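Each suite closes with a real/user/sys triplet (dif_generate: 1.31 s wall for its 1-second measured run plus startup and teardown). That format is bash's time keyword, which we assume run_test applies to the test body. The final suite, accel_dif_generate_copy, shows two 4096-byte buffers in its trace, which we read as the source it generates DIF for and the destination it copies into; its timing can be reproduced standalone:

    # time is the bash keyword, so real/user/sys print in the same
    # 0mX.XXXs form as the summaries above; path copied from the log.
    time sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy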
00:06:35.245 [2024-06-11 09:21:06.855900] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908995 ] 00:06:35.245 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.245 [2024-06-11 09:21:06.938019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.245 [2024-06-11 09:21:07.013929] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:07 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.245 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.505 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.505 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.505 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.505 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:35.505 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.505 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.505 09:21:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
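The val= entries above are the wrapper echoing the dif_generate_copy configuration it parsed out of accel_perf's output: 4096-byte buffers, the software module, two values of 32 (queue depth and transfer parameters, by accel_perf's conventions — an assumption here), core mask 0x1, and a 1-second run. A minimal sketch of that pattern follows, assuming the workspace path from this job; it is the shape suggested by the traced IFS=: / read -r var val / case "$var" loop, not accel.sh itself:

# Run the accel_perf example standalone and parse its "key: value" output
# the way the traced loop appears to. Paths copied from this job's workspace;
# the *Module*) key is a hypothetical match — the real key name may differ.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
while IFS=: read -r var val; do
    case "$var" in
        *Module*) echo "accel module in use:$val" ;;  # this run reported "software"
    esac
done < <("$SPDK/build/examples/accel_perf" -t 1 -w dif_generate_copy)

Omitting the -c /dev/fd/62 seen in the traced command should be safe for this case: that descriptor carries a generated accel module config, and with none given accel_perf falls back to the software module, which is what this run selected anyway.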
00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.445 00:06:36.445 real 0m1.318s 00:06:36.445 user 0m1.197s 00:06:36.445 sys 0m0.131s 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:36.445 09:21:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:36.445 ************************************ 00:06:36.445 END TEST accel_dif_generate_copy 00:06:36.445 ************************************ 00:06:36.445 09:21:08 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:36.445 09:21:08 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.445 09:21:08 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:36.445 09:21:08 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:36.445 09:21:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.445 ************************************ 00:06:36.445 START TEST accel_comp 00:06:36.445 ************************************ 00:06:36.445 09:21:08 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.445 09:21:08 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:36.445 09:21:08 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:36.445 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.445 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.445 09:21:08 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.445 09:21:08 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.445 09:21:08 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:36.445 09:21:08 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.445 09:21:08 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.445 09:21:08 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.445 09:21:08 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.445 09:21:08 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.445 09:21:08 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:36.445 09:21:08 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:36.445 [2024-06-11 09:21:08.249525] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:06:36.445 [2024-06-11 09:21:08.249586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid909338 ] 00:06:36.705 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.705 [2024-06-11 09:21:08.328604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.705 [2024-06-11 09:21:08.396613] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.705 
09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.705 09:21:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.706 09:21:08 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:36.706 09:21:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:38.089 09:21:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.089 00:06:38.089 real 0m1.310s 00:06:38.089 user 0m1.202s 00:06:38.089 sys 0m0.120s 00:06:38.089 09:21:09 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:38.089 09:21:09 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:38.089 ************************************ 00:06:38.089 END TEST accel_comp 00:06:38.089 ************************************ 00:06:38.089 09:21:09 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.089 09:21:09 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:38.089 09:21:09 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:38.089 09:21:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.089 ************************************ 00:06:38.089 START TEST accel_decomp 00:06:38.089 ************************************ 00:06:38.089 09:21:09 
accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.089 09:21:09 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:38.089 09:21:09 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:38.089 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:38.090 [2024-06-11 09:21:09.634252] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:06:38.090 [2024-06-11 09:21:09.634409] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid909687 ] 00:06:38.090 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.090 [2024-06-11 09:21:09.715708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.090 [2024-06-11 09:21:09.791351] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.090 09:21:09 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 09:21:09 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:38.090 09:21:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:39.500 09:21:10 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.500 00:06:39.500 real 0m1.318s 00:06:39.500 user 0m1.201s 00:06:39.500 sys 0m0.127s 00:06:39.500 09:21:10 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:39.500 09:21:10 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:39.500 ************************************ 00:06:39.500 END TEST accel_decomp 00:06:39.500 ************************************ 00:06:39.500 
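Both compression tests that just finished drive accel_perf against the checked-in test/accel/bib input: accel_comp ran -w compress -l <bib>, and accel_decomp ran -w decompress -l <bib> -y, where -y appears to request result verification. A rough sketch of reproducing the pair by hand, with the flags copied from the traced invocations and the workspace path assumed:

# Hypothetical manual re-run of the compress/decompress pair from this log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w compress -l "$SPDK/test/accel/bib"
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y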
09:21:10 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:39.500 09:21:10 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:39.500 09:21:10 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:39.500 09:21:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.500 ************************************ 00:06:39.500 START TEST accel_decomp_full 00:06:39.500 ************************************ 00:06:39.500 09:21:11 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:39.500 09:21:11 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:39.500 09:21:11 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:39.500 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.500 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.500 09:21:11 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:39.500 09:21:11 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:39.500 09:21:11 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:39.500 09:21:11 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.500 09:21:11 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.500 09:21:11 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.500 09:21:11 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:39.501 [2024-06-11 09:21:11.028680] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:06:39.501 [2024-06-11 09:21:11.028752] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910358 ] 00:06:39.501 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.501 [2024-06-11 09:21:11.109495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.501 [2024-06-11 09:21:11.180727] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
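Relative to plain accel_decomp, the _full variant adds -o 0 to the same decompress command, and the config echo above switches from '4096 bytes' to '111250 bytes' — presumably the whole bib input handled as a single buffer rather than in 4 KiB chunks. A sketch under the same path assumptions as the earlier ones:

# Hypothetical full-buffer decompress run; -o 0 copied from the traced command.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0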
00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:39.501 09:21:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:40.885 09:21:12 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.885 00:06:40.885 real 0m1.324s 00:06:40.885 user 0m1.208s 00:06:40.885 sys 0m0.128s 00:06:40.885 09:21:12 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:40.885 09:21:12 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:40.885 ************************************ 00:06:40.885 END TEST accel_decomp_full 00:06:40.885 ************************************ 00:06:40.885 09:21:12 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:40.885 09:21:12 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:40.885 09:21:12 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:40.885 09:21:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.885 ************************************ 00:06:40.885 START TEST accel_decomp_mcore 00:06:40.885 ************************************ 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:40.885 [2024-06-11 09:21:12.428281] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:06:40.885 [2024-06-11 09:21:12.428348] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910762 ] 00:06:40.885 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.885 [2024-06-11 09:21:12.508289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.885 [2024-06-11 09:21:12.591869] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.885 [2024-06-11 09:21:12.591989] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.885 [2024-06-11 09:21:12.592279] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.885 [2024-06-11 09:21:12.592280] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.885 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.886 09:21:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
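The mcore variant passes -m 0xf, and the reactor.c notices above show four reactors starting on cores 0-3; the val=0xf entry is that mask coming back through the config loop, and the user time reported below (0m4.455s against roughly 1.3s real) is consistent with four busy cores. A sketch with the same assumed paths:

# Hypothetical multi-core decompress run; -m 0xf (cores 0-3) from the traced command.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf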
00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.268 00:06:42.268 real 0m1.333s 00:06:42.268 user 0m4.455s 00:06:42.268 sys 0m0.130s 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:42.268 09:21:13 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:42.268 ************************************ 00:06:42.268 END TEST accel_decomp_mcore 00:06:42.268 ************************************ 00:06:42.268 09:21:13 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:42.268 09:21:13 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:42.268 09:21:13 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:42.268 09:21:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.268 ************************************ 00:06:42.268 START TEST accel_decomp_full_mcore 00:06:42.268 ************************************ 00:06:42.268 09:21:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:42.268 09:21:13 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:42.268 09:21:13 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:42.268 09:21:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.268 09:21:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.268 09:21:13 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:42.268 09:21:13 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:42.268 09:21:13 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:42.268 09:21:13 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.268 09:21:13 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.268 09:21:13 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.268 09:21:13 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:06:42.268 09:21:13 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.268 09:21:13 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:42.268 09:21:13 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:42.268 [2024-06-11 09:21:13.836222] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:06:42.268 [2024-06-11 09:21:13.836283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910980 ] 00:06:42.268 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.268 [2024-06-11 09:21:13.916108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.268 [2024-06-11 09:21:13.999246] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.268 [2024-06-11 09:21:13.999389] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.268 [2024-06-11 09:21:13.999690] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.268 [2024-06-11 09:21:13.999691] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.268 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.268 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.269 09:21:14 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:42.269 09:21:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.653 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.653 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.653 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.653 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.653 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.653 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.653 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.653 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.653 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.653 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.653 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.653 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.653 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.654 00:06:43.654 real 0m1.343s 00:06:43.654 user 0m4.499s 00:06:43.654 sys 0m0.124s 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:43.654 09:21:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:43.654 ************************************ 00:06:43.654 END TEST accel_decomp_full_mcore 00:06:43.654 ************************************ 00:06:43.654 09:21:15 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:43.654 09:21:15 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:43.654 09:21:15 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:43.654 09:21:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.654 ************************************ 00:06:43.654 START TEST accel_decomp_mthread 00:06:43.654 ************************************ 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
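The repeating `case "$var" in` / `IFS=: read -r var val` pairs above are accel.sh parsing its own colon-separated settings stream; each `val=` entry records one option handed to the run. A minimal standalone sketch of the multi-thread decompress case being set up here, assuming the workspace layout shown in the paths (standalone, you would substitute a JSON accel config file for the `/dev/fd/62` that `build_accel_config` feeds via `jq -r .`):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -t 1: run for one second; -w decompress: workload; -l: compressed input file
    # -y: verify decompressed output; -T 2: two worker threads (the val=2 above)
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l test/accel/bib -y -T 2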
00:06:43.654 [2024-06-11 09:21:15.252887] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:06:43.654 [2024-06-11 09:21:15.252949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911251 ] 00:06:43.654 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.654 [2024-06-11 09:21:15.333109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.654 [2024-06-11 09:21:15.412894] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.654 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.655 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.655 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:43.655 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.655 09:21:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.655 09:21:15 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.037 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.038 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.038 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.038 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:45.038 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.038 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.038 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.038 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.038 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:45.038 09:21:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.038 00:06:45.038 real 0m1.324s 00:06:45.038 user 0m1.218s 00:06:45.038 sys 0m0.117s 00:06:45.038 09:21:16 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:45.038 09:21:16 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:45.038 ************************************ 00:06:45.038 END TEST accel_decomp_mthread 00:06:45.038 ************************************ 00:06:45.038 09:21:16 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:45.038 09:21:16 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:45.038 09:21:16 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:45.038 09:21:16 
accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.038 ************************************ 00:06:45.038 START TEST accel_decomp_full_mthread 00:06:45.038 ************************************ 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:45.038 [2024-06-11 09:21:16.651507] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
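The "full" variant being configured here differs from the plain accel_decomp_mthread run above only in the added `-o 0`: where the earlier test recorded a `'4096 bytes'` transfer size, this one records the whole `'111250 bytes'` decompressed bib payload, still across two threads. The command under test, verbatim from the `accel.sh@12` line above:

    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2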
00:06:45.038 [2024-06-11 09:21:16.651574] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911601 ] 00:06:45.038 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.038 [2024-06-11 09:21:16.732039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.038 [2024-06-11 09:21:16.809289] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- 
# read -r var val 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:45.038 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:45.299 09:21:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.255 00:06:46.255 real 0m1.350s 00:06:46.255 user 0m1.233s 00:06:46.255 sys 0m0.129s 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:46.255 09:21:17 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:46.255 ************************************ 00:06:46.255 END TEST accel_decomp_full_mthread 00:06:46.255 
************************************ 00:06:46.255 09:21:18 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:46.255 09:21:18 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:46.255 09:21:18 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:46.255 09:21:18 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:46.255 09:21:18 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:46.255 09:21:18 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.255 09:21:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.255 09:21:18 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.255 09:21:18 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.255 09:21:18 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.255 09:21:18 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.255 09:21:18 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:46.255 09:21:18 accel -- accel/accel.sh@41 -- # jq -r . 00:06:46.255 ************************************ 00:06:46.255 START TEST accel_dif_functional_tests 00:06:46.255 ************************************ 00:06:46.255 09:21:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:46.551 [2024-06-11 09:21:18.097608] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:06:46.551 [2024-06-11 09:21:18.097656] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911949 ] 00:06:46.551 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.552 [2024-06-11 09:21:18.172980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.552 [2024-06-11 09:21:18.246118] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.552 [2024-06-11 09:21:18.246253] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.552 [2024-06-11 09:21:18.246256] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.552 00:06:46.552 00:06:46.552 CUnit - A unit testing framework for C - Version 2.1-3 00:06:46.552 http://cunit.sourceforge.net/ 00:06:46.552 00:06:46.552 00:06:46.552 Suite: accel_dif 00:06:46.552 Test: verify: DIF generated, GUARD check ...passed 00:06:46.552 Test: verify: DIF generated, APPTAG check ...passed 00:06:46.552 Test: verify: DIF generated, REFTAG check ...passed 00:06:46.552 Test: verify: DIF not generated, GUARD check ...[2024-06-11 09:21:18.301899] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:46.552 passed 00:06:46.552 Test: verify: DIF not generated, APPTAG check ...[2024-06-11 09:21:18.301944] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:46.552 passed 00:06:46.552 Test: verify: DIF not generated, REFTAG check ...[2024-06-11 09:21:18.301965] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:46.552 passed 00:06:46.552 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:46.552 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-11 09:21:18.302011] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:46.552 passed 00:06:46.552 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:06:46.552 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:46.552 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:46.552 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-11 09:21:18.302124] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:46.552 passed 00:06:46.552 Test: verify copy: DIF generated, GUARD check ...passed 00:06:46.552 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:46.552 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:46.552 Test: verify copy: DIF not generated, GUARD check ...[2024-06-11 09:21:18.302247] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:46.552 passed 00:06:46.552 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-11 09:21:18.302269] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:46.552 passed 00:06:46.552 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-11 09:21:18.302291] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:46.552 passed 00:06:46.552 Test: generate copy: DIF generated, GUARD check ...passed 00:06:46.552 Test: generate copy: DIF generated, APPTAG check ...passed 00:06:46.552 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:46.552 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:46.552 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:46.552 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:46.552 Test: generate copy: iovecs-len validate ...[2024-06-11 09:21:18.302498] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:46.552 passed 00:06:46.552 Test: generate copy: buffer alignment validate ...passed 00:06:46.552 00:06:46.552 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.552 suites 1 1 n/a 0 0 00:06:46.552 tests 26 26 26 0 0 00:06:46.552 asserts 115 115 115 0 n/a 00:06:46.552 00:06:46.552 Elapsed time = 0.002 seconds 00:06:46.813 00:06:46.813 real 0m0.368s 00:06:46.813 user 0m0.486s 00:06:46.813 sys 0m0.143s 00:06:46.813 09:21:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:46.813 09:21:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:46.813 ************************************ 00:06:46.813 END TEST accel_dif_functional_tests 00:06:46.813 ************************************ 00:06:46.813 00:06:46.813 real 0m30.788s 00:06:46.813 user 0m34.068s 00:06:46.813 sys 0m4.546s 00:06:46.813 09:21:18 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:46.813 09:21:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.813 ************************************ 00:06:46.813 END TEST accel 00:06:46.813 ************************************ 00:06:46.813 09:21:18 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:46.813 09:21:18 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:46.813 09:21:18 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:46.813 09:21:18 -- common/autotest_common.sh@10 -- # set +x 00:06:46.813 ************************************ 00:06:46.813 START TEST accel_rpc 00:06:46.813 ************************************ 00:06:46.813 09:21:18 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:46.813 * Looking for test storage... 00:06:47.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:47.074 09:21:18 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:47.074 09:21:18 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=912025 00:06:47.074 09:21:18 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 912025 00:06:47.074 09:21:18 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:47.074 09:21:18 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 912025 ']' 00:06:47.074 09:21:18 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.074 09:21:18 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:47.074 09:21:18 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.074 09:21:18 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:47.074 09:21:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.074 [2024-06-11 09:21:18.686989] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
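The accel_rpc suite starting here exercises the target purely over JSON-RPC: `spdk_tgt` is launched with `--wait-for-rpc` so the framework stays uninitialized, the `copy` opcode is assigned first to a deliberately bogus module and then to `software`, and only after `framework_start_init` does the test confirm which assignment stuck. A sketch of the same sequence as plain `rpc.py` calls, assuming the default `/var/tmp/spdk.sock` socket:

    ./build/bin/spdk_tgt --wait-for-rpc &
    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect   # accepted pre-init (see the NOTICE lines below)
    ./scripts/rpc.py accel_assign_opc -o copy -m software
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software   # expects "software"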
00:06:47.074 [2024-06-11 09:21:18.687043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid912025 ] 00:06:47.074 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.074 [2024-06-11 09:21:18.765714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.074 [2024-06-11 09:21:18.832411] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.015 09:21:19 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:48.015 09:21:19 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:48.015 09:21:19 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:48.015 09:21:19 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:48.015 09:21:19 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:48.015 09:21:19 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:48.015 09:21:19 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:48.015 09:21:19 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:48.015 09:21:19 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:48.015 09:21:19 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.015 ************************************ 00:06:48.015 START TEST accel_assign_opcode 00:06:48.015 ************************************ 00:06:48.015 09:21:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:06:48.015 09:21:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:48.015 09:21:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.015 09:21:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:48.015 [2024-06-11 09:21:19.578546] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:48.015 09:21:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.015 09:21:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:48.015 09:21:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.015 09:21:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:48.015 [2024-06-11 09:21:19.590570] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:48.015 09:21:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.015 09:21:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:48.015 09:21:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.015 09:21:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:48.015 09:21:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.015 09:21:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:48.015 09:21:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:48.015 09:21:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:48.015 09:21:19 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:48.015 09:21:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:48.015 09:21:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:48.015 software 00:06:48.015 00:06:48.016 real 0m0.210s 00:06:48.016 user 0m0.049s 00:06:48.016 sys 0m0.012s 00:06:48.016 09:21:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:48.016 09:21:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:48.016 ************************************ 00:06:48.016 END TEST accel_assign_opcode 00:06:48.016 ************************************ 00:06:48.016 09:21:19 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 912025 00:06:48.016 09:21:19 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 912025 ']' 00:06:48.016 09:21:19 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 912025 00:06:48.016 09:21:19 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:06:48.016 09:21:19 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:48.016 09:21:19 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 912025 00:06:48.276 09:21:19 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:48.276 09:21:19 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:48.276 09:21:19 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 912025' 00:06:48.276 killing process with pid 912025 00:06:48.276 09:21:19 accel_rpc -- common/autotest_common.sh@968 -- # kill 912025 00:06:48.276 09:21:19 accel_rpc -- common/autotest_common.sh@973 -- # wait 912025 00:06:48.276 00:06:48.276 real 0m1.550s 00:06:48.276 user 0m1.694s 00:06:48.276 sys 0m0.428s 00:06:48.276 09:21:20 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:48.276 09:21:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.276 ************************************ 00:06:48.276 END TEST accel_rpc 00:06:48.276 ************************************ 00:06:48.537 09:21:20 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:48.537 09:21:20 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:48.537 09:21:20 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:48.537 09:21:20 -- common/autotest_common.sh@10 -- # set +x 00:06:48.537 ************************************ 00:06:48.537 START TEST app_cmdline 00:06:48.537 ************************************ 00:06:48.537 09:21:20 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:48.537 * Looking for test storage... 
00:06:48.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:48.537 09:21:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:48.537 09:21:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=912435 00:06:48.537 09:21:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 912435 00:06:48.537 09:21:20 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:48.537 09:21:20 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 912435 ']' 00:06:48.537 09:21:20 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.537 09:21:20 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:48.537 09:21:20 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.537 09:21:20 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:48.537 09:21:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:48.537 [2024-06-11 09:21:20.315795] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:06:48.537 [2024-06-11 09:21:20.315859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid912435 ] 00:06:48.537 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.798 [2024-06-11 09:21:20.394593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.798 [2024-06-11 09:21:20.468353] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.369 09:21:21 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:49.369 09:21:21 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:06:49.369 09:21:21 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:49.629 { 00:06:49.629 "version": "SPDK v24.09-pre git sha1 b16523e5e", 00:06:49.629 "fields": { 00:06:49.629 "major": 24, 00:06:49.629 "minor": 9, 00:06:49.629 "patch": 0, 00:06:49.629 "suffix": "-pre", 00:06:49.629 "commit": "b16523e5e" 00:06:49.629 } 00:06:49.629 } 00:06:49.629 09:21:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:49.629 09:21:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:49.629 09:21:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:49.629 09:21:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:49.629 09:21:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:49.629 09:21:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:49.629 09:21:21 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:49.630 09:21:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:49.630 09:21:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.630 09:21:21 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:49.630 09:21:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:49.630 09:21:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:49.630 09:21:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.630 09:21:21 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:06:49.630 09:21:21 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.630 09:21:21 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:49.630 09:21:21 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:49.630 09:21:21 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:49.630 09:21:21 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:49.630 09:21:21 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:49.630 09:21:21 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:49.630 09:21:21 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:49.630 09:21:21 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:49.630 09:21:21 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.890 request: 00:06:49.890 { 00:06:49.890 "method": "env_dpdk_get_mem_stats", 00:06:49.890 "req_id": 1 00:06:49.890 } 00:06:49.890 Got JSON-RPC error response 00:06:49.890 response: 00:06:49.890 { 00:06:49.890 "code": -32601, 00:06:49.890 "message": "Method not found" 00:06:49.890 } 00:06:49.890 09:21:21 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:06:49.890 09:21:21 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:49.890 09:21:21 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:49.890 09:21:21 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:49.890 09:21:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 912435 00:06:49.890 09:21:21 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 912435 ']' 00:06:49.890 09:21:21 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 912435 00:06:49.890 09:21:21 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:06:49.890 09:21:21 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:49.890 09:21:21 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 912435 00:06:49.890 09:21:21 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:49.890 09:21:21 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:49.890 09:21:21 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 912435' 00:06:49.890 killing process with pid 912435 00:06:49.890 09:21:21 app_cmdline -- common/autotest_common.sh@968 -- # kill 912435 00:06:49.890 09:21:21 app_cmdline -- common/autotest_common.sh@973 -- # wait 912435 00:06:50.150 00:06:50.150 real 0m1.743s 00:06:50.150 user 0m2.200s 00:06:50.150 sys 0m0.446s 00:06:50.150 09:21:21 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:50.150 09:21:21 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:06:50.150 ************************************ 00:06:50.150 END TEST app_cmdline 00:06:50.150 ************************************ 00:06:50.150 09:21:21 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:50.150 09:21:21 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:50.151 09:21:21 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:50.151 09:21:21 -- common/autotest_common.sh@10 -- # set +x 00:06:50.430 ************************************ 00:06:50.430 START TEST version 00:06:50.430 ************************************ 00:06:50.430 09:21:21 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:50.430 * Looking for test storage... 00:06:50.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:50.430 09:21:22 version -- app/version.sh@17 -- # get_header_version major 00:06:50.430 09:21:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:50.430 09:21:22 version -- app/version.sh@14 -- # cut -f2 00:06:50.430 09:21:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.430 09:21:22 version -- app/version.sh@17 -- # major=24 00:06:50.430 09:21:22 version -- app/version.sh@18 -- # get_header_version minor 00:06:50.430 09:21:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:50.430 09:21:22 version -- app/version.sh@14 -- # cut -f2 00:06:50.430 09:21:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.430 09:21:22 version -- app/version.sh@18 -- # minor=9 00:06:50.430 09:21:22 version -- app/version.sh@19 -- # get_header_version patch 00:06:50.430 09:21:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:50.430 09:21:22 version -- app/version.sh@14 -- # cut -f2 00:06:50.430 09:21:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.430 09:21:22 version -- app/version.sh@19 -- # patch=0 00:06:50.430 09:21:22 version -- app/version.sh@20 -- # get_header_version suffix 00:06:50.430 09:21:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:50.430 09:21:22 version -- app/version.sh@14 -- # cut -f2 00:06:50.430 09:21:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:50.430 09:21:22 version -- app/version.sh@20 -- # suffix=-pre 00:06:50.430 09:21:22 version -- app/version.sh@22 -- # version=24.9 00:06:50.430 09:21:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:50.430 09:21:22 version -- app/version.sh@28 -- # version=24.9rc0 00:06:50.430 09:21:22 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:50.430 09:21:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:50.430 09:21:22 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:50.430 09:21:22 
version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:50.430 00:06:50.430 real 0m0.179s 00:06:50.430 user 0m0.087s 00:06:50.430 sys 0m0.131s 00:06:50.430 09:21:22 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:50.430 09:21:22 version -- common/autotest_common.sh@10 -- # set +x 00:06:50.430 ************************************ 00:06:50.430 END TEST version 00:06:50.430 ************************************ 00:06:50.430 09:21:22 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:50.430 09:21:22 -- spdk/autotest.sh@198 -- # uname -s 00:06:50.430 09:21:22 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:50.430 09:21:22 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:50.430 09:21:22 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:50.430 09:21:22 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:50.430 09:21:22 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:50.430 09:21:22 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:50.430 09:21:22 -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:50.430 09:21:22 -- common/autotest_common.sh@10 -- # set +x 00:06:50.698 09:21:22 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:50.698 09:21:22 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:50.698 09:21:22 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:50.698 09:21:22 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:50.698 09:21:22 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:50.698 09:21:22 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:50.698 09:21:22 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:50.698 09:21:22 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:50.698 09:21:22 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:50.698 09:21:22 -- common/autotest_common.sh@10 -- # set +x 00:06:50.698 ************************************ 00:06:50.698 START TEST nvmf_tcp 00:06:50.698 ************************************ 00:06:50.698 09:21:22 nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:50.698 * Looking for test storage... 00:06:50.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:50.698 09:21:22 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.698 09:21:22 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.698 09:21:22 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.698 09:21:22 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.698 09:21:22 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.698 09:21:22 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.698 09:21:22 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:50.698 09:21:22 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:50.698 09:21:22 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.699 09:21:22 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.699 09:21:22 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.699 09:21:22 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:50.699 09:21:22 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:50.699 09:21:22 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:50.699 09:21:22 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:50.699 09:21:22 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:50.699 09:21:22 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:50.699 09:21:22 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:50.699 09:21:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.699 09:21:22 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:50.699 09:21:22 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:50.699 09:21:22 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:50.699 09:21:22 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:50.699 09:21:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.699 ************************************ 00:06:50.699 START TEST nvmf_example 00:06:50.699 ************************************ 00:06:50.699 09:21:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:50.959 * Looking for test storage... 
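[Note] nvmf/common.sh, sourced above, pins the test ports (4420/4421/4422) and derives a per-run host identity with `nvme gen-hostnqn`. A minimal host-side sketch of how those variables are typically combined with nvme-cli; the target address 10.0.0.2 is the one this log assigns later, and the subsystem NQN is the NVME_SUBNQN default from above, so treat the exact invocation as illustrative rather than the test's literal command:

  # derive a host NQN/ID pair, as nvmf/common.sh does
  HOSTNQN=$(nvme gen-hostnqn)
  HOSTID=${HOSTNQN##*uuid:}              # the UUID suffix of the host NQN
  # connect to an NVMe-oF TCP subsystem on the default test port 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn="$HOSTNQN" --hostid="$HOSTID"
  # verify the association, then tear it down
  nvme list-subsys
  nvme disconnect -n nqn.2016-06.io.spdk:testnqn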
00:06:50.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:50.959 09:21:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.959 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:50.960 09:21:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:57.551 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:57.552 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:57.552 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:57.552 Found net devices under 
0000:4b:00.0: cvl_0_0 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:57.552 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:57.552 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:57.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:57.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:06:57.814 00:06:57.814 --- 10.0.0.2 ping statistics --- 00:06:57.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.814 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:57.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:57.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:06:57.814 00:06:57.814 --- 10.0.0.1 ping statistics --- 00:06:57.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.814 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=916812 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 916812 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 916812 ']' 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
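[Note] Condensed from the xtrace above, nvmf_tcp_init builds a two-endpoint topology out of the two e810 ports discovered earlier: the target port is moved into a private network namespace and the initiator port stays in the root namespace. The equivalent standalone sequence (interface names cvl_0_0/cvl_0_1 are specific to this machine and will differ elsewhere):

  ip netns add cvl_0_0_ns_spdk                       # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                 # initiator -> target sanity check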
00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:57.814 09:21:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:58.075 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.018 09:21:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:59.018 09:21:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:06:59.018 09:21:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:59.018 09:21:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:59.018 09:21:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:59.018 09:21:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:59.018 09:21:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:59.018 09:21:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:59.018 09:21:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:59.018 09:21:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:59.019 09:21:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:59.019 09:21:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:59.019 09:21:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:59.019 09:21:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:59.019 09:21:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:59.019 09:21:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:59.019 09:21:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:59.019 09:21:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:59.019 09:21:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:59.019 09:21:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:59.019 09:21:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:59.019 09:21:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:59.019 09:21:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:59.019 09:21:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:59.019 09:21:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:59.019 09:21:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:59.019 09:21:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:59.019 09:21:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:59.019 09:21:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:59.019 EAL: No free 2048 kB hugepages reported on node 1 
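[Note] The rpc_cmd calls above (rpc_cmd effectively drives scripts/rpc.py over the default /var/tmp/spdk.sock) provision the target that the perf run then exercises. Replayed by hand the sequence would look roughly like this; the netns prefix matches this run, and the comments give the standard rpc.py option meanings:

  rpc="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport; -u = 8192-byte in-capsule data
  $rpc bdev_malloc_create 64 512                   # 64 MiB RAM disk, 512-byte blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # load generator: queue depth 64, 4 KiB I/O, 30/70 read/write mix, 10 s run
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'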
00:07:11.253 Initializing NVMe Controllers 00:07:11.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:11.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:11.253 Initialization complete. Launching workers. 00:07:11.253 ======================================================== 00:07:11.253 Latency(us) 00:07:11.253 Device Information : IOPS MiB/s Average min max 00:07:11.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16580.38 64.77 3860.39 836.00 15416.79 00:07:11.253 ======================================================== 00:07:11.253 Total : 16580.38 64.77 3860.39 836.00 15416.79 00:07:11.253 00:07:11.253 09:21:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:11.253 09:21:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:11.253 09:21:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:11.253 09:21:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:11.253 09:21:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:11.253 09:21:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:11.253 09:21:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:11.253 09:21:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:11.253 rmmod nvme_tcp 00:07:11.253 rmmod nvme_fabrics 00:07:11.253 rmmod nvme_keyring 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 916812 ']' 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 916812 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 916812 ']' 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 916812 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 916812 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 916812' 00:07:11.253 killing process with pid 916812 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@968 -- # kill 916812 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@973 -- # wait 916812 00:07:11.253 nvmf threads initialize successfully 00:07:11.253 bdev subsystem init successfully 00:07:11.253 created a nvmf target service 00:07:11.253 create targets's poll groups done 00:07:11.253 all subsystems of target started 00:07:11.253 nvmf target is running 00:07:11.253 all subsystems of target stopped 00:07:11.253 destroy targets's poll groups done 00:07:11.253 destroyed the nvmf target service 00:07:11.253 bdev subsystem finish successfully 00:07:11.253 nvmf threads destroy successfully 00:07:11.253 09:21:41 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.253 09:21:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.514 09:21:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:11.514 09:21:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:11.514 09:21:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:11.514 09:21:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:11.514 00:07:11.514 real 0m20.849s 00:07:11.514 user 0m46.869s 00:07:11.514 sys 0m6.421s 00:07:11.514 09:21:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:11.515 09:21:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:11.515 ************************************ 00:07:11.515 END TEST nvmf_example 00:07:11.515 ************************************ 00:07:11.779 09:21:43 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:11.779 09:21:43 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:11.779 09:21:43 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:11.779 09:21:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:11.779 ************************************ 00:07:11.779 START TEST nvmf_filesystem 00:07:11.779 ************************************ 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:11.779 * Looking for test storage... 
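[Note] For symmetry with the setup, the nvmf_example shutdown path above (nvmftestfini) reduces to roughly the following; the netns deletion step is an assumption about what the _remove_spdk_ns helper does behind the xtrace:

  kill "$nvmfpid" && wait "$nvmfpid"      # stop the nvmf target app (pid 916812 in this run)
  modprobe -v -r nvme-tcp                 # drops nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk         # assumed _remove_spdk_ns equivalent
  ip -4 addr flush cvl_0_1                # clear the initiator-side test address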
00:07:11.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:11.779 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:11.780 09:21:43 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:11.780 09:21:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:11.780 #define SPDK_CONFIG_H 00:07:11.780 #define SPDK_CONFIG_APPS 1 00:07:11.780 #define SPDK_CONFIG_ARCH native 00:07:11.780 #undef SPDK_CONFIG_ASAN 00:07:11.780 #undef SPDK_CONFIG_AVAHI 00:07:11.780 #undef SPDK_CONFIG_CET 00:07:11.780 #define SPDK_CONFIG_COVERAGE 1 00:07:11.780 #define SPDK_CONFIG_CROSS_PREFIX 00:07:11.780 #undef SPDK_CONFIG_CRYPTO 00:07:11.780 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:11.780 #undef SPDK_CONFIG_CUSTOMOCF 00:07:11.780 #undef SPDK_CONFIG_DAOS 00:07:11.780 #define SPDK_CONFIG_DAOS_DIR 00:07:11.780 #define SPDK_CONFIG_DEBUG 1 00:07:11.780 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:11.780 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:11.780 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:11.780 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:11.780 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:11.780 #undef SPDK_CONFIG_DPDK_UADK 00:07:11.780 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:11.780 #define SPDK_CONFIG_EXAMPLES 1 00:07:11.780 #undef SPDK_CONFIG_FC 00:07:11.780 #define SPDK_CONFIG_FC_PATH 00:07:11.780 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:11.780 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:11.780 #undef SPDK_CONFIG_FUSE 00:07:11.780 #undef SPDK_CONFIG_FUZZER 00:07:11.780 #define SPDK_CONFIG_FUZZER_LIB 00:07:11.780 #undef SPDK_CONFIG_GOLANG 00:07:11.780 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:11.780 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:11.780 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:11.780 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:11.780 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:11.780 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:11.780 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:11.780 #define SPDK_CONFIG_IDXD 1 00:07:11.780 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:11.780 #undef SPDK_CONFIG_IPSEC_MB 00:07:11.780 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:11.780 #define SPDK_CONFIG_ISAL 1 00:07:11.780 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:11.780 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:11.780 #define SPDK_CONFIG_LIBDIR 00:07:11.780 #undef SPDK_CONFIG_LTO 00:07:11.780 #define SPDK_CONFIG_MAX_LCORES 00:07:11.780 #define SPDK_CONFIG_NVME_CUSE 1 00:07:11.780 #undef SPDK_CONFIG_OCF 00:07:11.780 #define SPDK_CONFIG_OCF_PATH 00:07:11.780 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:11.780 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:11.780 #define SPDK_CONFIG_PGO_DIR 00:07:11.780 #undef SPDK_CONFIG_PGO_USE 00:07:11.780 #define SPDK_CONFIG_PREFIX /usr/local 00:07:11.780 #undef SPDK_CONFIG_RAID5F 00:07:11.780 #undef SPDK_CONFIG_RBD 00:07:11.780 #define SPDK_CONFIG_RDMA 1 00:07:11.780 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:11.780 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:11.780 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:11.780 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:11.780 #define SPDK_CONFIG_SHARED 1 00:07:11.780 #undef SPDK_CONFIG_SMA 00:07:11.780 #define SPDK_CONFIG_TESTS 1 00:07:11.780 #undef SPDK_CONFIG_TSAN 00:07:11.780 #define SPDK_CONFIG_UBLK 1 00:07:11.781 #define SPDK_CONFIG_UBSAN 1 00:07:11.781 #undef SPDK_CONFIG_UNIT_TESTS 00:07:11.781 #undef SPDK_CONFIG_URING 00:07:11.781 #define SPDK_CONFIG_URING_PATH 00:07:11.781 #undef SPDK_CONFIG_URING_ZNS 00:07:11.781 #undef SPDK_CONFIG_USDT 00:07:11.781 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:11.781 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:11.781 #define SPDK_CONFIG_VFIO_USER 1 00:07:11.781 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:11.781 #define SPDK_CONFIG_VHOST 1 00:07:11.781 #define SPDK_CONFIG_VIRTIO 1 00:07:11.781 #undef SPDK_CONFIG_VTUNE 00:07:11.781 #define SPDK_CONFIG_VTUNE_DIR 00:07:11.781 #define SPDK_CONFIG_WERROR 1 00:07:11.781 #define SPDK_CONFIG_WPDK_DIR 00:07:11.781 #undef SPDK_CONFIG_XNVME 00:07:11.781 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:11.781 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:11.782 09:21:43 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:11.782 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
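The long run of ': 0' / 'export SPDK_TEST_*' pairs traced above is autotest_common.sh giving every test flag a default before exporting it, so the values set earlier in autorun-spdk.conf (SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp, SPDK_TEST_NVMF_NICS=e810, SPDK_RUN_UBSAN=1) survive while every unset flag falls back to 0 or an empty string. A minimal sketch of that idiom, assuming the usual parameter-expansion form rather than the exact source lines:

: "${SPDK_TEST_NVMF:=0}"    # keep the value from autorun-spdk.conf when set (here 1), else default to 0
export SPDK_TEST_NVMF       # export so every child test script inherits the decision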
00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 919644 ]] 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 919644 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.Mh3qWD 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Mh3qWD/tests/target /tmp/spdk.Mh3qWD 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=956665856 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4327763968 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=118704803840 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370968064 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10666164224 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680771584 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685481984 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864495104 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874194432 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9699328 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=216064 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:11.783 09:21:43 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=287744 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684589056 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685486080 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=897024 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937089024 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937093120 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:11.783 * Looking for test storage... 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.783 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=118704803840 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12880756736 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:12.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.044 09:21:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:12.045 09:21:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:18.657 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:07:18.657 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:18.657 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:18.657 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:18.657 09:21:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:18.657 09:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:18.657 09:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:18.657 09:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:18.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:18.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:07:18.657 00:07:18.657 --- 10.0.0.2 ping statistics --- 00:07:18.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.657 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:07:18.657 09:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:18.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:18.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:07:18.657 00:07:18.657 --- 10.0.0.1 ping statistics --- 00:07:18.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.657 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:18.658 ************************************ 00:07:18.658 START TEST nvmf_filesystem_no_in_capsule 00:07:18.658 ************************************ 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=923198 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 923198 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 923198 ']' 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
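At this point the NVMe/TCP loopback topology is in place: nvmf_tcp_init moved one ice port (cvl_0_0, 10.0.0.2) into the fresh cvl_0_0_ns_spdk namespace for the target, kept the other (cvl_0_1, 10.0.0.1) in the default namespace for the initiator, opened port 4420, and the two pings above confirm reachability in both directions. Condensed from the trace, with interface names and addresses specific to this run:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # drop any stale addresses
ip netns add cvl_0_0_ns_spdk                            # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                      # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator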
00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.658 09:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:18.658 [2024-06-11 09:21:50.236619] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:07:18.658 [2024-06-11 09:21:50.236674] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.658 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.658 [2024-06-11 09:21:50.322372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.658 [2024-06-11 09:21:50.424241] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.658 [2024-06-11 09:21:50.424295] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:18.658 [2024-06-11 09:21:50.424304] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.658 [2024-06-11 09:21:50.424311] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.658 [2024-06-11 09:21:50.424325] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:18.658 [2024-06-11 09:21:50.424489] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.658 [2024-06-11 09:21:50.424729] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.658 [2024-06-11 09:21:50.424899] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.658 [2024-06-11 09:21:50.424901] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.598 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:19.598 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:19.598 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:19.598 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:19.598 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.599 [2024-06-11 09:21:51.107945] tcp.c: 672:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.599 Malloc1 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.599 [2024-06-11 09:21:51.242000] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:19.599 09:21:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:19.599 { 00:07:19.599 "name": "Malloc1", 00:07:19.599 "aliases": [ 00:07:19.599 "7931071f-05cc-4cf0-9ca2-54c294deba09" 00:07:19.599 ], 00:07:19.599 "product_name": "Malloc disk", 00:07:19.599 "block_size": 512, 00:07:19.599 "num_blocks": 1048576, 00:07:19.599 "uuid": "7931071f-05cc-4cf0-9ca2-54c294deba09", 00:07:19.599 "assigned_rate_limits": { 00:07:19.599 "rw_ios_per_sec": 0, 00:07:19.599 "rw_mbytes_per_sec": 0, 00:07:19.599 "r_mbytes_per_sec": 0, 00:07:19.599 "w_mbytes_per_sec": 0 00:07:19.599 }, 00:07:19.599 "claimed": true, 00:07:19.599 "claim_type": "exclusive_write", 00:07:19.599 "zoned": false, 00:07:19.599 "supported_io_types": { 00:07:19.599 "read": true, 00:07:19.599 "write": true, 00:07:19.599 "unmap": true, 00:07:19.599 "write_zeroes": true, 00:07:19.599 "flush": true, 00:07:19.599 "reset": true, 00:07:19.599 "compare": false, 00:07:19.599 "compare_and_write": false, 00:07:19.599 "abort": true, 00:07:19.599 "nvme_admin": false, 00:07:19.599 "nvme_io": false 00:07:19.599 }, 00:07:19.599 "memory_domains": [ 00:07:19.599 { 00:07:19.599 "dma_device_id": "system", 00:07:19.599 "dma_device_type": 1 00:07:19.599 }, 00:07:19.599 { 00:07:19.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.599 "dma_device_type": 2 00:07:19.599 } 00:07:19.599 ], 00:07:19.599 "driver_specific": {} 00:07:19.599 } 00:07:19.599 ]' 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:19.599 09:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:21.510 09:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:21.510 09:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:21.510 09:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:21.510 09:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:21.510 09:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1204 -- # sleep 2 00:07:23.423 09:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:23.423 09:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:23.423 09:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:23.423 09:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:23.423 09:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:23.423 09:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:23.423 09:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:23.423 09:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:23.423 09:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:23.423 09:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:23.423 09:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:23.423 09:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:23.423 09:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:23.423 09:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:23.423 09:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:23.423 09:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:23.423 09:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:23.423 09:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:23.993 09:21:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:24.936 09:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:24.936 09:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:24.936 09:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:24.936 09:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:24.936 09:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.936 ************************************ 00:07:24.936 START TEST filesystem_ext4 00:07:24.936 ************************************ 00:07:24.936 09:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # 
nvmf_filesystem_create ext4 nvme0n1 00:07:24.936 09:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:24.936 09:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:24.936 09:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:24.936 09:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:07:24.936 09:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:24.936 09:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:07:24.936 09:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:07:24.936 09:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:07:24.936 09:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:07:24.936 09:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:24.936 mke2fs 1.46.5 (30-Dec-2021) 00:07:24.936 Discarding device blocks: 0/522240 done 00:07:24.936 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:24.936 Filesystem UUID: 18c70d73-258d-4e8e-b8e2-ac84d1be5ccd 00:07:24.936 Superblock backups stored on blocks: 00:07:24.936 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:24.936 00:07:24.936 Allocating group tables: 0/64 done 00:07:24.936 Writing inode tables: 0/64 done 00:07:25.197 Creating journal (8192 blocks): done 00:07:25.197 Writing superblocks and filesystem accounting information: 0/64 done 00:07:25.197 00:07:25.197 09:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:07:25.197 09:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:25.197 09:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:25.458 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:25.458 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:25.458 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:25.458 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:25.458 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:25.458 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 923198 00:07:25.458 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:25.458 09:21:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:25.458 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:25.458 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:25.458 00:07:25.458 real 0m0.461s 00:07:25.458 user 0m0.024s 00:07:25.458 sys 0m0.047s 00:07:25.458 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:25.458 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:25.458 ************************************ 00:07:25.458 END TEST filesystem_ext4 00:07:25.458 ************************************ 00:07:25.459 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:25.459 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:25.459 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:25.459 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.459 ************************************ 00:07:25.459 START TEST filesystem_btrfs 00:07:25.459 ************************************ 00:07:25.459 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:25.459 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:25.459 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:25.459 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:25.459 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:07:25.459 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:25.459 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:07:25.459 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:07:25.459 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:07:25.459 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:07:25.459 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:26.030 btrfs-progs v6.6.2 00:07:26.030 See https://btrfs.readthedocs.io for more information. 00:07:26.030 00:07:26.030 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:26.030 NOTE: several default settings have changed in version 5.15, please make sure 00:07:26.030 this does not affect your deployments: 00:07:26.030 - DUP for metadata (-m dup) 00:07:26.030 - enabled no-holes (-O no-holes) 00:07:26.030 - enabled free-space-tree (-R free-space-tree) 00:07:26.030 00:07:26.030 Label: (null) 00:07:26.030 UUID: d65f682a-4e45-4618-9c84-f2690f214741 00:07:26.030 Node size: 16384 00:07:26.030 Sector size: 4096 00:07:26.030 Filesystem size: 510.00MiB 00:07:26.030 Block group profiles: 00:07:26.030 Data: single 8.00MiB 00:07:26.030 Metadata: DUP 32.00MiB 00:07:26.030 System: DUP 8.00MiB 00:07:26.030 SSD detected: yes 00:07:26.030 Zoned device: no 00:07:26.030 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:26.030 Runtime features: free-space-tree 00:07:26.030 Checksum: crc32c 00:07:26.030 Number of devices: 1 00:07:26.030 Devices: 00:07:26.030 ID SIZE PATH 00:07:26.030 1 510.00MiB /dev/nvme0n1p1 00:07:26.030 00:07:26.030 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:07:26.030 09:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 923198 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:26.973 00:07:26.973 real 0m1.302s 00:07:26.973 user 0m0.021s 00:07:26.973 sys 0m0.064s 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:26.973 ************************************ 00:07:26.973 END TEST filesystem_btrfs 00:07:26.973 ************************************ 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:26.973 09:21:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.973 ************************************ 00:07:26.973 START TEST filesystem_xfs 00:07:26.973 ************************************ 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:07:26.973 09:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:26.973 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:26.973 = sectsz=512 attr=2, projid32bit=1 00:07:26.973 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:26.973 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:26.973 data = bsize=4096 blocks=130560, imaxpct=25 00:07:26.973 = sunit=0 swidth=0 blks 00:07:26.973 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:26.973 log =internal log bsize=4096 blocks=16384, version=2 00:07:26.973 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:26.973 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:27.915 Discarding blocks...Done. 
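Each of these filesystem_* subtests drives the same smoke cycle against the exported namespace: make the filesystem, mount it, write and remove a file with syncs in between, unmount, then confirm that both the target process and the block device survived. Condensed from the traced target/filesystem.sh logic (the ext4 and btrfs passes appear above; the xfs pass continues below):

    fstype=$1; dev=/dev/nvme0n1p1             # partition created by parted earlier
    force=-f; [ "$fstype" = ext4 ] && force=-F   # mkfs.ext4 spells 'force' differently
    mkfs.$fstype $force $dev
    mount $dev /mnt/device
    touch /mnt/device/aaa                     # prove writes land
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1     # controller still exposes the disk
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # and its partition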
00:07:27.915 09:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:07:27.915 09:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:29.827 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:30.088 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:30.088 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:30.088 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:30.088 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:30.088 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:30.088 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 923198 00:07:30.088 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:30.088 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:30.088 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:30.088 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:30.088 00:07:30.088 real 0m3.161s 00:07:30.088 user 0m0.024s 00:07:30.088 sys 0m0.055s 00:07:30.088 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:30.088 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:30.088 ************************************ 00:07:30.088 END TEST filesystem_xfs 00:07:30.088 ************************************ 00:07:30.088 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:30.088 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:30.088 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:30.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:30.350 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:30.350 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:07:30.350 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:30.350 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:30.350 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:07:30.350 
09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:30.350 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:07:30.350 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:30.350 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:30.350 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.350 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:30.350 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:30.350 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 923198 00:07:30.350 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 923198 ']' 00:07:30.350 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 923198 00:07:30.350 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:07:30.350 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:30.350 09:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 923198 00:07:30.350 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:30.350 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:30.350 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 923198' 00:07:30.350 killing process with pid 923198 00:07:30.350 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 923198 00:07:30.350 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 923198 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:30.610 00:07:30.610 real 0m12.082s 00:07:30.610 user 0m47.431s 00:07:30.610 sys 0m1.093s 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.610 ************************************ 00:07:30.610 END TEST nvmf_filesystem_no_in_capsule 00:07:30.610 ************************************ 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.610 
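Teardown of the no_in_capsule variant, visible just above, unwinds the setup in reverse before the in-capsule variant starts from scratch; the essential steps, condensed from the trace, are:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1    # drop the test partition under a lock
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # detach the kernel initiator
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # rpc_cmd is the harness wrapper for SPDK's scripts/rpc.py
    kill 923198 && wait 923198                        # stop this run's nvmf_tgt (pid from the log)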
************************************ 00:07:30.610 START TEST nvmf_filesystem_in_capsule 00:07:30.610 ************************************ 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=925841 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 925841 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 925841 ']' 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:30.610 09:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.610 [2024-06-11 09:22:02.397573] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:07:30.610 [2024-06-11 09:22:02.397617] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.871 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.871 [2024-06-11 09:22:02.479401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.871 [2024-06-11 09:22:02.544427] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.871 [2024-06-11 09:22:02.544462] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.871 [2024-06-11 09:22:02.544469] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.871 [2024-06-11 09:22:02.544476] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.871 [2024-06-11 09:22:02.544481] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
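The only knob that differs between the two variants is the transport's in-capsule data size: the first run used -c 0, while this one passes -c 4096 (see the transport creation below), so writes of up to 4 KiB ride inside the NVMe/TCP command capsule rather than being solicited as a separate data transfer. The target-side bring-up, condensed from the trace (rpc_cmd wraps SPDK's scripts/rpc.py; -m 0xF gives the four reactors seen in the notices, -e 0xFFFF enables all tracepoint groups):

    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 4096 = in-capsule data size
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1             # 512 MiB RAM bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420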
00:07:30.871 [2024-06-11 09:22:02.544619] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.871 [2024-06-11 09:22:02.544736] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.871 [2024-06-11 09:22:02.544895] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.871 [2024-06-11 09:22:02.544897] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.814 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:31.814 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:31.814 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:31.814 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:31.814 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.814 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.814 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:31.814 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:31.814 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.814 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.814 [2024-06-11 09:22:03.307159] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.814 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.814 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:31.814 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.814 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.814 Malloc1 00:07:31.814 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.814 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:31.814 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.814 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.815 09:22:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.815 [2024-06-11 09:22:03.438372] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:31.815 { 00:07:31.815 "name": "Malloc1", 00:07:31.815 "aliases": [ 00:07:31.815 "06de1a3c-240e-4704-ae0d-08b15c685519" 00:07:31.815 ], 00:07:31.815 "product_name": "Malloc disk", 00:07:31.815 "block_size": 512, 00:07:31.815 "num_blocks": 1048576, 00:07:31.815 "uuid": "06de1a3c-240e-4704-ae0d-08b15c685519", 00:07:31.815 "assigned_rate_limits": { 00:07:31.815 "rw_ios_per_sec": 0, 00:07:31.815 "rw_mbytes_per_sec": 0, 00:07:31.815 "r_mbytes_per_sec": 0, 00:07:31.815 "w_mbytes_per_sec": 0 00:07:31.815 }, 00:07:31.815 "claimed": true, 00:07:31.815 "claim_type": "exclusive_write", 00:07:31.815 "zoned": false, 00:07:31.815 "supported_io_types": { 00:07:31.815 "read": true, 00:07:31.815 "write": true, 00:07:31.815 "unmap": true, 00:07:31.815 "write_zeroes": true, 00:07:31.815 "flush": true, 00:07:31.815 "reset": true, 00:07:31.815 "compare": false, 00:07:31.815 "compare_and_write": false, 00:07:31.815 "abort": true, 00:07:31.815 "nvme_admin": false, 00:07:31.815 "nvme_io": false 00:07:31.815 }, 00:07:31.815 "memory_domains": [ 00:07:31.815 { 00:07:31.815 "dma_device_id": "system", 00:07:31.815 "dma_device_type": 1 00:07:31.815 }, 00:07:31.815 { 00:07:31.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.815 "dma_device_type": 2 00:07:31.815 } 00:07:31.815 ], 00:07:31.815 "driver_specific": {} 00:07:31.815 } 00:07:31.815 ]' 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] 
.block_size' 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:31.815 09:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:33.200 09:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:33.200 09:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:33.200 09:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:33.200 09:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:33.200 09:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:07:35.782 09:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:35.782 09:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:35.782 09:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:35.782 09:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:35.782 09:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:35.782 09:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:35.782 09:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:35.782 09:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:35.782 09:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:35.782 09:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:35.782 09:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:35.782 09:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:35.782 09:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:35.782 09:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:07:35.782 09:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:35.782 09:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:35.782 09:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:35.782 09:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:36.043 09:22:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:36.986 09:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:36.986 09:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:36.986 09:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:36.986 09:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:36.986 09:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.246 ************************************ 00:07:37.246 START TEST filesystem_in_capsule_ext4 00:07:37.246 ************************************ 00:07:37.246 09:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:37.246 09:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:37.246 09:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:37.246 09:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:37.246 09:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:07:37.246 09:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:37.246 09:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:07:37.246 09:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:07:37.246 09:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:07:37.246 09:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:07:37.246 09:22:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:37.246 mke2fs 1.46.5 (30-Dec-2021) 00:07:37.246 Discarding device blocks: 0/522240 done 00:07:37.246 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:37.246 Filesystem UUID: 4c95d9e2-23f4-41bb-8bb3-fe69a7fc9084 00:07:37.246 Superblock backups stored on blocks: 00:07:37.246 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:37.246 00:07:37.246 Allocating group tables: 0/64 done 00:07:37.246 Writing inode tables: 0/64 done 00:07:37.565 Creating journal (8192 blocks): done 00:07:37.565 Writing superblocks and filesystem accounting information: 0/64 done 00:07:37.565 00:07:37.565 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:07:37.565 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:37.565 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:37.826 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:37.826 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 925841 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:37.827 00:07:37.827 real 0m0.608s 00:07:37.827 user 0m0.021s 00:07:37.827 sys 0m0.049s 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:37.827 ************************************ 00:07:37.827 END TEST filesystem_in_capsule_ext4 00:07:37.827 ************************************ 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.827 ************************************ 00:07:37.827 START TEST filesystem_in_capsule_btrfs 00:07:37.827 ************************************ 00:07:37.827 09:22:09 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:37.827 btrfs-progs v6.6.2 00:07:37.827 See https://btrfs.readthedocs.io for more information. 00:07:37.827 00:07:37.827 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:37.827 NOTE: several default settings have changed in version 5.15, please make sure
00:07:37.827 this does not affect your deployments:
00:07:37.827 - DUP for metadata (-m dup)
00:07:37.827 - enabled no-holes (-O no-holes)
00:07:37.827 - enabled free-space-tree (-R free-space-tree)
00:07:37.827
00:07:37.827 Label: (null)
00:07:37.827 UUID: 7b45ee47-1a09-4439-ac17-7d9b46fc0c83
00:07:37.827 Node size: 16384
00:07:37.827 Sector size: 4096
00:07:37.827 Filesystem size: 510.00MiB
00:07:37.827 Block group profiles:
00:07:37.827 Data: single 8.00MiB
00:07:37.827 Metadata: DUP 32.00MiB
00:07:37.827 System: DUP 8.00MiB
00:07:37.827 SSD detected: yes
00:07:37.827 Zoned device: no
00:07:37.827 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:07:37.827 Runtime features: free-space-tree
00:07:37.827 Checksum: crc32c
00:07:37.827 Number of devices: 1
00:07:37.827 Devices:
00:07:37.827 ID SIZE PATH
00:07:37.827 1 510.00MiB /dev/nvme0n1p1
00:07:37.827
00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0
00:07:37.827 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:07:38.397 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:07:38.398 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:07:38.398 09:22:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 925841
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:07:38.398
00:07:38.398 real 0m0.518s
00:07:38.398 user 0m0.036s
00:07:38.398 sys 0m0.054s
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:07:38.398 ************************************
00:07:38.398 END TEST filesystem_in_capsule_btrfs
00:07:38.398 ************************************
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:07:38.398 ************************************
00:07:38.398 START TEST filesystem_in_capsule_xfs
00:07:38.398 ************************************
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']'
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f
00:07:38.398 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1
00:07:38.398 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:07:38.398 = sectsz=512 attr=2, projid32bit=1
00:07:38.398 = crc=1 finobt=1, sparse=1, rmapbt=0
00:07:38.398 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:07:38.398 data = bsize=4096 blocks=130560, imaxpct=25
00:07:38.398 = sunit=0 swidth=0 blks
00:07:38.398 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:07:38.398 log =internal log bsize=4096 blocks=16384, version=2
00:07:38.398 = sectsz=512 sunit=0 blks, lazy-count=1
00:07:38.398 realtime =none extsz=4096 blocks=0, rtextents=0
00:07:39.345 Discarding blocks...Done.
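The xtrace records above (common/autotest_common.sh lines 925-944) outline the shape of the make_filesystem helper that each of the ext4, btrfs, and xfs cases calls before mounting the namespace. A minimal bash sketch of what that helper appears to do, reconstructed only from the traced lines; the ext4 force flag and the retry loop between lines 936 and 944 are not visible in this excerpt and are assumptions:

    # Hypothetical reconstruction of make_filesystem from the xtrace above.
    # Grounded in the traced lines @925-@936 and @944; the -F flag for ext4
    # and the retry loop are assumptions, since those lines are not shown here.
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        # The traced btrfs/xfs runs pass -f; mkfs.ext4 conventionally needs -F.
        if [ "$fstype" = ext4 ]; then
            force=-F
        else
            force=-f
        fi
        # Assumed: retry a few times before giving up (the trace only shows
        # the counter being initialized to 0 and the helper returning 0).
        while ! mkfs.$fstype $force "$dev_name"; do
            i=$((i + 1))
            [ "$i" -ge 3 ] && return 1
            sleep 1
        done
        return 0
    }

Under that reading, the call traced here is make_filesystem xfs /dev/nvme0n1p1, after which target/filesystem.sh mounts the partition, touches and removes a file, syncs, and unmounts, as the records below show.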
00:07:39.345 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0
00:07:39.346 09:22:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 925841
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:07:41.889
00:07:41.889 real 0m3.091s
00:07:41.889 user 0m0.024s
00:07:41.889 sys 0m0.054s
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x
00:07:41.889 ************************************
00:07:41.889 END TEST filesystem_in_capsule_xfs
00:07:41.889 ************************************
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:07:41.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL
00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME
00:07:41.889 09:22:13
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:07:41.889 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:41.890 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.890 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.890 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.890 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:41.890 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 925841 00:07:41.890 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 925841 ']' 00:07:41.890 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 925841 00:07:41.890 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:07:41.890 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:41.890 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 925841 00:07:41.890 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:41.890 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:41.890 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 925841' 00:07:41.890 killing process with pid 925841 00:07:41.890 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 925841 00:07:41.890 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 925841 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:42.151 00:07:42.151 real 0m11.484s 00:07:42.151 user 0m45.230s 00:07:42.151 sys 0m1.066s 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.151 ************************************ 00:07:42.151 END TEST nvmf_filesystem_in_capsule 00:07:42.151 ************************************ 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@120 -- # set +e 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:42.151 rmmod nvme_tcp 00:07:42.151 rmmod nvme_fabrics 00:07:42.151 rmmod nvme_keyring 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.151 09:22:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.696 09:22:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:44.696 00:07:44.696 real 0m32.602s 00:07:44.696 user 1m34.508s 00:07:44.696 sys 0m7.237s 00:07:44.696 09:22:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:44.696 09:22:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.696 ************************************ 00:07:44.696 END TEST nvmf_filesystem 00:07:44.696 ************************************ 00:07:44.696 09:22:16 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:44.696 09:22:16 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:44.696 09:22:16 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:44.696 09:22:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:44.696 ************************************ 00:07:44.696 START TEST nvmf_target_discovery 00:07:44.696 ************************************ 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:44.696 * Looking for test storage... 
00:07:44.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:44.696 09:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.288 09:22:23 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.288 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:51.288 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:51.289 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:51.289 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:51.289 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.289 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.550 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.550 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.550 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up
00:07:51.550 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:51.550 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:51.550 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:51.550 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:51.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:51.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.773 ms
00:07:51.550
00:07:51.550 --- 10.0.0.2 ping statistics ---
00:07:51.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:51.550 rtt min/avg/max/mdev = 0.773/0.773/0.773/0.000 ms
00:07:51.550 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:51.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:51.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms
00:07:51.550
00:07:51.550 --- 10.0.0.1 ping statistics ---
00:07:51.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:51.550 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms
00:07:51.550 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:51.550 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0
00:07:51.550 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:51.550 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:51.550 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:51.550 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:51.550 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:51.550 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:51.550 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:51.812 09:22:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:07:51.812 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:51.812 09:22:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable
00:07:51.812 09:22:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:51.812 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=932847
00:07:51.812 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 932847
00:07:51.812 09:22:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:07:51.812 09:22:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 932847 ']'
00:07:51.812 09:22:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:51.812 09:22:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100
00:07:51.812 09:22:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX
domain socket /var/tmp/spdk.sock...' 00:07:51.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.812 09:22:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:51.812 09:22:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.812 [2024-06-11 09:22:23.462687] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:07:51.812 [2024-06-11 09:22:23.462747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.812 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.812 [2024-06-11 09:22:23.550176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.071 [2024-06-11 09:22:23.651822] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.071 [2024-06-11 09:22:23.651873] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.071 [2024-06-11 09:22:23.651881] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.071 [2024-06-11 09:22:23.651888] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.071 [2024-06-11 09:22:23.651894] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.071 [2024-06-11 09:22:23.652076] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.071 [2024-06-11 09:22:23.652208] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.071 [2024-06-11 09:22:23.652403] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.071 [2024-06-11 09:22:23.652404] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.642 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:52.642 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:07:52.642 09:22:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:52.642 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:52.642 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.642 09:22:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.642 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:52.642 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.643 [2024-06-11 09:22:24.380027] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:52.643 09:22:24 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.643 Null1 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.643 [2024-06-11 09:22:24.440325] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.643 Null2 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:52.643 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:52.905 09:22:24 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.905 Null3 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.905 Null4 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.905 09:22:24 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:52.905 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420
00:07:53.167
00:07:53.167 Discovery Log Number of Records 6, Generation counter 6
00:07:53.167 =====Discovery Log Entry 0======
00:07:53.167 trtype: tcp
00:07:53.167 adrfam: ipv4
00:07:53.167 subtype: current discovery subsystem
00:07:53.167 treq: not required
00:07:53.167 portid: 0
00:07:53.167 trsvcid: 4420
00:07:53.167 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:07:53.167 traddr: 10.0.0.2
00:07:53.167 eflags: explicit discovery connections, duplicate discovery information
00:07:53.167 sectype: none
00:07:53.167 =====Discovery Log Entry 1======
00:07:53.167 trtype: tcp
00:07:53.167 adrfam: ipv4
00:07:53.167 subtype: nvme subsystem
00:07:53.167 treq: not required
00:07:53.167 portid: 0
00:07:53.167 trsvcid: 4420
00:07:53.167 subnqn: nqn.2016-06.io.spdk:cnode1
00:07:53.167 traddr: 10.0.0.2
00:07:53.167 eflags: none
00:07:53.167 sectype: none
00:07:53.167 =====Discovery Log Entry 2======
00:07:53.167 trtype: tcp
00:07:53.167 adrfam: ipv4
00:07:53.167 subtype: nvme subsystem
00:07:53.167 treq: not required
00:07:53.167 portid: 0
00:07:53.167 trsvcid: 4420
00:07:53.167 subnqn: nqn.2016-06.io.spdk:cnode2
00:07:53.167 traddr: 10.0.0.2
00:07:53.167 eflags: none
00:07:53.167 sectype: none
00:07:53.167 =====Discovery Log Entry 3======
00:07:53.167 trtype: tcp
00:07:53.167 adrfam: ipv4
00:07:53.167 subtype: nvme subsystem
00:07:53.167 treq: not required
00:07:53.167 portid: 0
00:07:53.167 trsvcid: 4420
00:07:53.167 subnqn: nqn.2016-06.io.spdk:cnode3
00:07:53.167 traddr: 10.0.0.2
00:07:53.167 eflags: none
00:07:53.167 sectype: none
00:07:53.167 =====Discovery Log Entry 4======
00:07:53.167 trtype: tcp
00:07:53.167 adrfam: ipv4
00:07:53.167 subtype: nvme subsystem
00:07:53.167 treq: not required
00:07:53.167 portid: 0
00:07:53.167 trsvcid: 4420
00:07:53.167 subnqn: nqn.2016-06.io.spdk:cnode4
00:07:53.167 traddr: 10.0.0.2
00:07:53.167 eflags: none
00:07:53.167 sectype: none
00:07:53.167 =====Discovery Log Entry 5======
00:07:53.167 trtype: tcp
00:07:53.167 adrfam: ipv4
00:07:53.167 subtype: discovery subsystem referral
00:07:53.167 treq: not required
00:07:53.167 portid: 0
00:07:53.167 trsvcid: 4430
00:07:53.167 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:07:53.167 traddr: 10.0.0.2
00:07:53.167 eflags: none
00:07:53.167 sectype: none
00:07:53.167 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:07:53.167 Perform nvmf subsystem discovery via RPC
00:07:53.167 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:07:53.167 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:53.167 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:53.167 [
00:07:53.167 {
00:07:53.167 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:07:53.167 "subtype": "Discovery",
00:07:53.167 "listen_addresses": [
00:07:53.167 {
00:07:53.167 "trtype": "TCP",
00:07:53.167 "adrfam": "IPv4",
00:07:53.167 "traddr": "10.0.0.2",
00:07:53.167 "trsvcid": "4420"
00:07:53.167 }
00:07:53.167 ],
00:07:53.167 "allow_any_host": true,
00:07:53.167 "hosts": []
00:07:53.167 },
00:07:53.167 {
00:07:53.167 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:07:53.167 "subtype": "NVMe",
00:07:53.167 "listen_addresses": [
00:07:53.167 {
00:07:53.167 "trtype": "TCP",
00:07:53.167 "adrfam": "IPv4",
00:07:53.167 "traddr": "10.0.0.2",
00:07:53.167 "trsvcid": "4420"
00:07:53.167 }
00:07:53.167 ],
00:07:53.167 "allow_any_host": true,
00:07:53.167 "hosts": [],
00:07:53.167 "serial_number": "SPDK00000000000001",
00:07:53.167 "model_number": "SPDK bdev Controller",
00:07:53.167 "max_namespaces": 32,
00:07:53.167 "min_cntlid": 1,
00:07:53.168 "max_cntlid": 65519,
00:07:53.168 "namespaces": [
00:07:53.168 {
00:07:53.168 "nsid": 1,
00:07:53.168 "bdev_name": "Null1",
00:07:53.168 "name": "Null1",
00:07:53.168 "nguid": "437C4DEC7AA844368C8DB3BBB0F151BE",
00:07:53.168 "uuid": "437c4dec-7aa8-4436-8c8d-b3bbb0f151be"
00:07:53.168 }
00:07:53.168 ]
00:07:53.168 },
00:07:53.168 {
00:07:53.168 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:07:53.168 "subtype": "NVMe",
00:07:53.168 "listen_addresses": [
00:07:53.168 {
00:07:53.168 "trtype": "TCP",
00:07:53.168 "adrfam": "IPv4",
00:07:53.168 "traddr": "10.0.0.2",
00:07:53.168 "trsvcid": "4420"
00:07:53.168 }
00:07:53.168 ],
00:07:53.168 "allow_any_host": true,
00:07:53.168 "hosts": [],
00:07:53.168 "serial_number": "SPDK00000000000002",
00:07:53.168 "model_number": "SPDK bdev Controller",
00:07:53.168 "max_namespaces": 32,
00:07:53.168 "min_cntlid": 1,
00:07:53.168 "max_cntlid": 65519,
00:07:53.168 "namespaces": [
00:07:53.168 {
00:07:53.168 "nsid": 1,
00:07:53.168 "bdev_name": "Null2",
00:07:53.168 "name": "Null2",
00:07:53.168 "nguid": "B444140EDA2144A0880CCE5FB7F48874",
00:07:53.168 "uuid": "b444140e-da21-44a0-880c-ce5fb7f48874"
00:07:53.168 }
00:07:53.168 ]
00:07:53.168 },
00:07:53.168 {
00:07:53.168 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:07:53.168 "subtype": "NVMe",
00:07:53.168 "listen_addresses": [
00:07:53.168 {
00:07:53.168 "trtype": "TCP",
00:07:53.168 "adrfam": "IPv4",
00:07:53.168 "traddr": "10.0.0.2",
00:07:53.168 "trsvcid": "4420"
00:07:53.168 }
00:07:53.168 ],
00:07:53.168 "allow_any_host": true,
00:07:53.168 "hosts": [],
00:07:53.168 "serial_number": "SPDK00000000000003",
00:07:53.168 "model_number": "SPDK bdev Controller",
00:07:53.168 "max_namespaces": 32,
00:07:53.168 "min_cntlid": 1,
00:07:53.168 "max_cntlid": 65519,
00:07:53.168 "namespaces": [
00:07:53.168 {
00:07:53.168 "nsid": 1,
00:07:53.168 "bdev_name": "Null3",
00:07:53.168 "name": "Null3",
00:07:53.168 "nguid": "CE1B21B5A9A3409E9C53E515E082AF5C",
00:07:53.168 "uuid": "ce1b21b5-a9a3-409e-9c53-e515e082af5c"
00:07:53.168 }
00:07:53.168 ]
00:07:53.168 },
00:07:53.168 {
00:07:53.168 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:07:53.168 "subtype": "NVMe",
00:07:53.168 "listen_addresses": [
00:07:53.168 {
00:07:53.168 "trtype": "TCP",
00:07:53.168 "adrfam": "IPv4",
00:07:53.168 "traddr": "10.0.0.2",
00:07:53.168 "trsvcid": "4420"
00:07:53.168 }
00:07:53.168 ],
00:07:53.168 "allow_any_host": true,
00:07:53.168 "hosts": [],
00:07:53.168 "serial_number": "SPDK00000000000004",
00:07:53.168 "model_number": "SPDK bdev Controller",
00:07:53.168 "max_namespaces": 32,
00:07:53.168 "min_cntlid": 1,
00:07:53.168 "max_cntlid": 65519,
00:07:53.168 "namespaces": [
00:07:53.168 {
00:07:53.168 "nsid": 1,
00:07:53.168 "bdev_name": "Null4",
00:07:53.168 "name": "Null4",
00:07:53.168 "nguid": "7103E26D236B4044B4D3B7070E7FF9D9",
00:07:53.168 "uuid": "7103e26d-236b-4044-b4d3-b7070e7ff9d9"
00:07:53.168 }
00:07:53.168 ]
00:07:53.168 }
00:07:53.168 ]
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:53.168 09:22:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:53.168 rmmod nvme_tcp 00:07:53.168 rmmod nvme_fabrics 00:07:53.168 rmmod nvme_keyring 00:07:53.429 09:22:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:53.429 09:22:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:53.429 09:22:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:53.429 09:22:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 932847 ']' 00:07:53.429 09:22:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 932847 00:07:53.429 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 932847 ']' 00:07:53.429 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 932847 00:07:53.429 09:22:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:07:53.429 09:22:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:53.429 09:22:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 932847 00:07:53.429 09:22:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:53.429 09:22:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:53.429 09:22:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 932847' 00:07:53.429 killing process with pid 932847 00:07:53.429 09:22:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 932847 00:07:53.429 09:22:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@973 -- # wait 932847 00:07:53.429 09:22:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:53.429 09:22:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:53.429 09:22:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:53.429 09:22:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:53.429 09:22:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:53.429 09:22:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.429 09:22:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.429 09:22:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.037 09:22:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:56.037 00:07:56.037 real 0m11.176s 00:07:56.037 user 0m8.430s 00:07:56.037 sys 0m5.718s 00:07:56.037 09:22:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:56.037 09:22:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:56.037 ************************************ 00:07:56.037 END TEST nvmf_target_discovery 00:07:56.037 ************************************ 00:07:56.037 09:22:27 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:56.037 09:22:27 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:56.037 09:22:27 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:56.037 09:22:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:56.037 ************************************ 00:07:56.037 START TEST nvmf_referrals 00:07:56.037 ************************************ 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:56.037 * Looking for test storage... 00:07:56.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:56.037 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
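referrals.sh has just defined its three referral addresses (the 4430 referral port is set on the next trace line). A sketch of the registration and the count check the test performs a few lines below — written as a loop for brevity, though the script issues the three calls individually, again via rpc_cmd against the same socket:

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
rpc.py nvmf_discovery_get_referrals | jq length   # the test asserts this is 3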
00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:56.038 09:22:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.638 09:22:34 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:02.638 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:02.638 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.638 09:22:34 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:02.638 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:02.638 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.638 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.639 09:22:34 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:02.639 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:02.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:08:02.900 00:08:02.900 --- 10.0.0.2 ping statistics --- 00:08:02.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.900 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:02.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:08:02.900 00:08:02.900 --- 10.0.0.1 ping statistics --- 00:08:02.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.900 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=937605 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 937605 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 937605 ']' 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
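The two pings above confirm the data path nvmf_tcp_init just built: the first E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace as the target side, the second port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, and nvmf_tgt is then launched inside the namespace. Condensed from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF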
00:08:02.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:02.900 09:22:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.900 [2024-06-11 09:22:34.675769] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:08:02.900 [2024-06-11 09:22:34.675834] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.900 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.161 [2024-06-11 09:22:34.761736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.161 [2024-06-11 09:22:34.858272] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.161 [2024-06-11 09:22:34.858337] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.161 [2024-06-11 09:22:34.858345] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.161 [2024-06-11 09:22:34.858353] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.161 [2024-06-11 09:22:34.858359] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.161 [2024-06-11 09:22:34.858441] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.161 [2024-06-11 09:22:34.858572] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.161 [2024-06-11 09:22:34.858736] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.161 [2024-06-11 09:22:34.858737] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.104 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:04.104 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:08:04.104 09:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:04.104 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:04.104 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.104 09:22:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.104 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:04.104 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.104 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.104 [2024-06-11 09:22:35.610127] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.104 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.104 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:04.104 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.104 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.104 [2024-06-11 09:22:35.626308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
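With the TCP transport created and the discovery service listening on 10.0.0.2:8009, the nvme branch of get_referral_ips re-reads the referrals from the host side. Its check, verbatim from the trace apart from the line breaks:

nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
  --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
  -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
  | sort
# expected once the three referrals are registered:
# 127.0.0.2
# 127.0.0.3
# 127.0.0.4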
00:08:04.104 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.104 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:04.104 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.104 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.104 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.104 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 
-s 8009 -o json 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.105 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:04.367 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:04.367 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:04.367 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:04.367 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.367 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.367 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.367 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:04.367 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.367 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.367 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.367 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:04.367 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.367 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.367 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.367 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.367 09:22:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:04.367 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.367 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.367 09:22:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:04.367 09:22:36 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.367 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:04.628 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:04.628 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:04.628 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:04.628 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:04.628 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:04.628 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.628 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:04.628 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:04.628 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:04.628 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:04.628 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:04.628 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.628 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.890 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:05.152 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:05.152 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:05.152 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:08:05.152 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:05.152 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:05.152 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.152 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:05.152 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:05.152 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:05.152 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:05.152 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:05.152 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.152 09:22:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:05.413 rmmod nvme_tcp 00:08:05.413 rmmod nvme_fabrics 00:08:05.413 rmmod nvme_keyring 00:08:05.413 09:22:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 937605 ']' 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 937605 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 937605 ']' 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 937605 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 937605 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 937605' 00:08:05.675 killing process with pid 937605 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 937605 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 937605 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.675 09:22:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.226 09:22:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:08.226 00:08:08.226 real 0m12.159s 00:08:08.226 user 0m13.276s 00:08:08.226 sys 0m5.890s 00:08:08.226 09:22:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1125 -- # xtrace_disable 
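Earlier in this test, a referral registered with -n nqn.2016-06.io.spdk:cnode1 surfaced in the discovery log as an "nvme subsystem" record, while a plain (or -n discovery) referral surfaces as a "discovery subsystem referral" record; the get_discovery_entries helper filters on exactly that field. A sketch of the helper, assuming the NVME_HOST array from nvmf/common.sh is in scope:

get_discovery_entries() {
  local subtype=$1
  nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq ".records[] | select(.subtype == \"$subtype\")"
}
get_discovery_entries 'nvme subsystem' | jq -r .subnqn
# -> nqn.2016-06.io.spdk:cnode1 (referral carrying an explicit subsystem NQN)
get_discovery_entries 'discovery subsystem referral' | jq -r .subnqn
# -> nqn.2014-08.org.nvmexpress.discovery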
00:08:08.226 09:22:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.226 ************************************ 00:08:08.226 END TEST nvmf_referrals 00:08:08.226 ************************************ 00:08:08.226 09:22:39 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:08.226 09:22:39 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:08.226 09:22:39 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:08.226 09:22:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:08.226 ************************************ 00:08:08.226 START TEST nvmf_connect_disconnect 00:08:08.226 ************************************ 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:08.226 * Looking for test storage... 00:08:08.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.226 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:08.227 
09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:08.227 09:22:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:14.823 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:14.824 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:14.824 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:14.824 
09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:14.824 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:14.824 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.824 09:22:46 
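
[Sketch] By this point gather_supported_nvmf_pci_devs has matched both E810 ports (Intel device ID 0x159b, bound to the ice driver) and resolved their kernel net devices through sysfs. The lookup the trace walks through reduces to the following, with the PCI addresses taken from this run's "Found ..." lines:

    pci_devs=(0000:4b:00.0 0000:4b:00.1)             # the two ice-bound E810 ports found above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # each PCI function lists its net device name under sysfs
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")      # strip the path, keep e.g. cvl_0_0
        net_devs+=("${pci_net_devs[@]}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
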
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:08:14.824 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:15.084 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:15.084 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:15.084 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:15.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:15.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms
00:08:15.084
00:08:15.084 --- 10.0.0.2 ping statistics ---
00:08:15.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:15.084 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms
00:08:15.084 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:15.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:15.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms
00:08:15.084
00:08:15.084 --- 10.0.0.1 ping statistics ---
00:08:15.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:15.084 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms
00:08:15.084 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:15.084 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@723 -- # xtrace_disable
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=942401
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 942401
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 942401 ']'
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:15.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable
00:08:15.085 09:22:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:08:15.085 [2024-06-11 09:22:46.817460] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
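
[Sketch] The nvmf_tcp_init trace above is the whole dual-port loopback topology of this job: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace to act as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), a first-position iptables rule admits the NVMe/TCP port, and both directions are verified with a single ping before the target is launched inside the namespace (which is why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD). Extracted from the trace, the wiring is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF
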
00:08:15.085 [2024-06-11 09:22:46.817510] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.085 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.345 [2024-06-11 09:22:46.900948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.345 [2024-06-11 09:22:46.970140] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.345 [2024-06-11 09:22:46.970184] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.345 [2024-06-11 09:22:46.970192] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.345 [2024-06-11 09:22:46.970199] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.345 [2024-06-11 09:22:46.970204] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.345 [2024-06-11 09:22:46.970304] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.345 [2024-06-11 09:22:46.970457] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.345 [2024-06-11 09:22:46.970459] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.345 [2024-06-11 09:22:46.970333] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:15.917 [2024-06-11 09:22:47.691114] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:15.917 09:22:47 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:15.917 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:16.177 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:16.177 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.177 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:16.177 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:16.177 [2024-06-11 09:22:47.750513] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.177 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:16.177 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:16.177 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:16.177 09:22:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:20.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.527 09:23:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:34.527 09:23:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:34.527 09:23:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:34.527 09:23:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:34.527 09:23:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:34.527 09:23:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:34.527 09:23:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:34.527 09:23:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:34.527 rmmod nvme_tcp 00:08:34.527 rmmod nvme_fabrics 00:08:34.527 rmmod nvme_keyring 00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 942401 ']' 00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 942401 00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- 
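
[Sketch] The test body above is the entire connect/disconnect scenario: stand up a TCP subsystem over RPC, then loop connect/disconnect num_iterations=5 times (the loop itself runs with xtrace off at connect_disconnect.sh@34, so only the five "disconnected 1 controller(s)" lines appear). Condensed, with rpc_cmd expanded to scripts/rpc.py (rpc_cmd is a thin wrapper around it, talking to /var/tmp/spdk.sock), and assuming the standard nvme-cli flags implied by NVME_CONNECT/NVME_HOST in common.sh rather than copied from the untraced loop:

    # target side, via SPDK's scripts/rpc.py
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                  # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side, repeated five times by the test loop
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$NVME_HOSTNQN"
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # prints "disconnected 1 controller(s)"
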
common/autotest_common.sh@949 -- # '[' -z 942401 ']'
00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 942401
00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname
00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 942401
00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 942401'
00:08:34.527 killing process with pid 942401
00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 942401
00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 942401
00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:08:34.527 09:23:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:37.073 09:23:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:08:37.073
00:08:37.073 real 0m28.734s
00:08:37.073 user 1m18.765s
00:08:37.073 sys 0m6.363s
00:08:37.073 09:23:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable
00:08:37.073 09:23:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:08:37.074 ************************************
00:08:37.074 END TEST nvmf_connect_disconnect
00:08:37.074 ************************************
00:08:37.074 09:23:08 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:08:37.074 09:23:08 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:08:37.074 09:23:08 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:08:37.074 09:23:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:08:37.074 ************************************
00:08:37.074 START TEST nvmf_multitarget
00:08:37.074 ************************************
00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:08:37.074 * Looking for test storage...
00:08:37.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:37.074 09:23:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.660 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:43.660 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:43.661 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:43.661 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:43.661 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:43.661 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:43.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:43.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms
00:08:43.923
00:08:43.923 --- 10.0.0.2 ping statistics ---
00:08:43.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:43.923 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:43.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:43.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.361 ms
00:08:43.923
00:08:43.923 --- 10.0.0.1 ping statistics ---
00:08:43.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:43.923 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=950680
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 950680
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 950680 ']'
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:43.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable
00:08:43.923 09:23:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:08:44.184 [2024-06-11 09:23:15.758112] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:08:44.184 [2024-06-11 09:23:15.758184] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.184 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.184 [2024-06-11 09:23:15.846491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:44.184 [2024-06-11 09:23:15.942517] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.184 [2024-06-11 09:23:15.942574] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.184 [2024-06-11 09:23:15.942583] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.184 [2024-06-11 09:23:15.942590] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.184 [2024-06-11 09:23:15.942596] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.185 [2024-06-11 09:23:15.942724] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.185 [2024-06-11 09:23:15.942852] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.185 [2024-06-11 09:23:15.943018] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.185 [2024-06-11 09:23:15.943020] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.128 09:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:45.128 09:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:08:45.128 09:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:45.128 09:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:45.128 09:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:45.128 09:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.128 09:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:45.128 09:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:45.128 09:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:45.128 09:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:45.128 09:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:45.128 "nvmf_tgt_1" 00:08:45.128 09:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:45.389 "nvmf_tgt_2" 00:08:45.389 09:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:45.389 09:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:45.389 09:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:45.389 
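
[Sketch] The multitarget suite exercises nothing beyond target-object lifecycle through the helper script test/nvmf/target/multitarget_rpc.py: create two extra targets, check the count with jq, delete them (traced below), and check the count drops back to the single default target. The whole scenario condenses to:

    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
    multitarget_rpc.py nvmf_get_targets | jq length    # 3: the default target plus the two created
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
    multitarget_rpc.py nvmf_get_targets | jq length    # back to 1
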
09:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:45.389 true 00:08:45.389 09:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:45.650 true 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:45.650 rmmod nvme_tcp 00:08:45.650 rmmod nvme_fabrics 00:08:45.650 rmmod nvme_keyring 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 950680 ']' 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 950680 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 950680 ']' 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 950680 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:45.650 09:23:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 950680 00:08:45.911 09:23:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:45.911 09:23:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:45.911 09:23:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 950680' 00:08:45.911 killing process with pid 950680 00:08:45.911 09:23:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 950680 00:08:45.911 09:23:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 950680 00:08:45.911 09:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:45.911 09:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:45.911 09:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:45.911 09:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.911 09:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:45.911 09:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.911 09:23:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.911 09:23:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.515 09:23:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:48.515 00:08:48.515 real 0m11.327s 00:08:48.515 user 0m9.625s 00:08:48.515 sys 0m5.836s 00:08:48.515 09:23:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:48.515 09:23:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:48.515 ************************************ 00:08:48.515 END TEST nvmf_multitarget 00:08:48.515 ************************************ 00:08:48.515 09:23:19 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:48.515 09:23:19 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:48.515 09:23:19 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:48.515 09:23:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:48.515 ************************************ 00:08:48.515 START TEST nvmf_rpc 00:08:48.515 ************************************ 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:48.515 * Looking for test storage... 00:08:48.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.515 09:23:19 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.515 
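The repeated toolchain prefixes in the PATH lines above come from paths/export.sh being sourced once per nested script; each pass prepends the same go/protoc/golangci directories again, which is harmless but noisy. A generic way to collapse such duplicates, purely illustrative and not part of the SPDK scripts:

  # Deduplicate PATH entries while preserving their first-seen order.
  PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
  export PATH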
09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:48.515 09:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.121 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:55.121 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:55.121 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:55.122 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:55.122 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:55.122 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.122 
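The device walk above is driven by sysfs: for every PCI function whose vendor/device pair matches the e810 table (0x8086:0x159b in this run), the script globs /sys/bus/pci/devices/$pci/net/ to find the backing kernel netdev. A minimal standalone sketch of the same lookup, with the IDs hard-coded to the ones matched here:

  # List net devices backing Intel e810 (0x8086:0x159b) PCI functions.
  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
      [[ -d $pci/net ]] || continue            # skip functions with no netdev bound
      for net in "$pci"/net/*; do
          echo "${pci##*/} -> ${net##*/}"      # e.g. 0000:4b:00.0 -> cvl_0_0
      done
  done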
09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:55.122 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:55.122 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:55.384 09:23:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:55.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:55.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:08:55.384 00:08:55.384 --- 10.0.0.2 ping statistics --- 00:08:55.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.384 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:55.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:55.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:08:55.384 00:08:55.384 --- 10.0.0.1 ping statistics --- 00:08:55.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.384 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=955362 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 955362 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 955362 ']' 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:55.384 09:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.384 [2024-06-11 09:23:27.132944] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
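The nvmf_tcp_init plumbing traced above boils down to a two-interface topology: the target-side port moves into a private network namespace and the initiator stays in the root namespace, so host and target can talk over real NICs on a single machine. Condensed, with the interface names taken from this run (substitute your own NIC pair):

  ip netns add cvl_0_0_ns_spdk                              # target lives here
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # and back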
00:08:55.384 [2024-06-11 09:23:27.132992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.384 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.645 [2024-06-11 09:23:27.215637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:55.645 [2024-06-11 09:23:27.304549] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.645 [2024-06-11 09:23:27.304610] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.645 [2024-06-11 09:23:27.304618] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.645 [2024-06-11 09:23:27.304625] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.645 [2024-06-11 09:23:27.304631] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.645 [2024-06-11 09:23:27.304771] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.645 [2024-06-11 09:23:27.304902] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.645 [2024-06-11 09:23:27.305049] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.645 [2024-06-11 09:23:27.305051] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.218 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:56.218 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:08:56.218 09:23:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:56.218 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:56.218 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.478 09:23:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.478 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:56.478 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.478 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.478 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.478 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:56.478 "tick_rate": 2400000000, 00:08:56.478 "poll_groups": [ 00:08:56.478 { 00:08:56.478 "name": "nvmf_tgt_poll_group_000", 00:08:56.478 "admin_qpairs": 0, 00:08:56.478 "io_qpairs": 0, 00:08:56.478 "current_admin_qpairs": 0, 00:08:56.478 "current_io_qpairs": 0, 00:08:56.478 "pending_bdev_io": 0, 00:08:56.478 "completed_nvme_io": 0, 00:08:56.478 "transports": [] 00:08:56.478 }, 00:08:56.478 { 00:08:56.478 "name": "nvmf_tgt_poll_group_001", 00:08:56.478 "admin_qpairs": 0, 00:08:56.478 "io_qpairs": 0, 00:08:56.478 "current_admin_qpairs": 0, 00:08:56.478 "current_io_qpairs": 0, 00:08:56.478 "pending_bdev_io": 0, 00:08:56.478 "completed_nvme_io": 0, 00:08:56.478 "transports": [] 00:08:56.478 }, 00:08:56.478 { 00:08:56.478 "name": "nvmf_tgt_poll_group_002", 00:08:56.478 "admin_qpairs": 0, 00:08:56.478 "io_qpairs": 0, 00:08:56.478 "current_admin_qpairs": 0, 00:08:56.478 "current_io_qpairs": 0, 00:08:56.478 "pending_bdev_io": 0, 00:08:56.478 "completed_nvme_io": 0, 00:08:56.478 "transports": [] 
00:08:56.478 }, 00:08:56.478 { 00:08:56.478 "name": "nvmf_tgt_poll_group_003", 00:08:56.478 "admin_qpairs": 0, 00:08:56.478 "io_qpairs": 0, 00:08:56.478 "current_admin_qpairs": 0, 00:08:56.479 "current_io_qpairs": 0, 00:08:56.479 "pending_bdev_io": 0, 00:08:56.479 "completed_nvme_io": 0, 00:08:56.479 "transports": [] 00:08:56.479 } 00:08:56.479 ] 00:08:56.479 }' 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.479 [2024-06-11 09:23:28.165548] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:56.479 "tick_rate": 2400000000, 00:08:56.479 "poll_groups": [ 00:08:56.479 { 00:08:56.479 "name": "nvmf_tgt_poll_group_000", 00:08:56.479 "admin_qpairs": 0, 00:08:56.479 "io_qpairs": 0, 00:08:56.479 "current_admin_qpairs": 0, 00:08:56.479 "current_io_qpairs": 0, 00:08:56.479 "pending_bdev_io": 0, 00:08:56.479 "completed_nvme_io": 0, 00:08:56.479 "transports": [ 00:08:56.479 { 00:08:56.479 "trtype": "TCP" 00:08:56.479 } 00:08:56.479 ] 00:08:56.479 }, 00:08:56.479 { 00:08:56.479 "name": "nvmf_tgt_poll_group_001", 00:08:56.479 "admin_qpairs": 0, 00:08:56.479 "io_qpairs": 0, 00:08:56.479 "current_admin_qpairs": 0, 00:08:56.479 "current_io_qpairs": 0, 00:08:56.479 "pending_bdev_io": 0, 00:08:56.479 "completed_nvme_io": 0, 00:08:56.479 "transports": [ 00:08:56.479 { 00:08:56.479 "trtype": "TCP" 00:08:56.479 } 00:08:56.479 ] 00:08:56.479 }, 00:08:56.479 { 00:08:56.479 "name": "nvmf_tgt_poll_group_002", 00:08:56.479 "admin_qpairs": 0, 00:08:56.479 "io_qpairs": 0, 00:08:56.479 "current_admin_qpairs": 0, 00:08:56.479 "current_io_qpairs": 0, 00:08:56.479 "pending_bdev_io": 0, 00:08:56.479 "completed_nvme_io": 0, 00:08:56.479 "transports": [ 00:08:56.479 { 00:08:56.479 "trtype": "TCP" 00:08:56.479 } 00:08:56.479 ] 00:08:56.479 }, 00:08:56.479 { 00:08:56.479 "name": "nvmf_tgt_poll_group_003", 00:08:56.479 "admin_qpairs": 0, 00:08:56.479 "io_qpairs": 0, 00:08:56.479 "current_admin_qpairs": 0, 00:08:56.479 "current_io_qpairs": 0, 00:08:56.479 "pending_bdev_io": 0, 00:08:56.479 "completed_nvme_io": 0, 00:08:56.479 "transports": [ 00:08:56.479 { 00:08:56.479 "trtype": "TCP" 00:08:56.479 } 00:08:56.479 ] 00:08:56.479 } 00:08:56.479 ] 
00:08:56.479 }' 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:56.479 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.740 Malloc1 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.740 [2024-06-11 09:23:28.357995] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:08:56.740 [2024-06-11 09:23:28.384746] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:08:56.740 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:56.740 could not add new controller: failed to write to nvme-fabrics device 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.740 09:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:58.126 09:23:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:58.126 09:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:08:58.126 09:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:08:58.126 09:23:29 
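The failed-then-successful connect pair above is the host-authorization check: with allow_any_host disabled, the target rejects the connect at the fabrics level ("does not allow host"), and only after nvmf_subsystem_add_host registers the host NQN does the same command succeed. In plain scripts/rpc.py plus nvme-cli terms, roughly (with $HOSTNQN standing in for the host NQN used in the run):

  rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # lock down
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$HOSTNQN"                       # rejected: host not on allowed list
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$HOSTNQN"                       # accepted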
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:08:58.126 09:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:00.042 09:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:00.303 09:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:00.303 09:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:00.303 09:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:00.303 09:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:00.303 09:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:00.303 09:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:00.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.303 09:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:00.303 09:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:00.303 09:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:00.303 09:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.303 09:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:00.303 09:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x 
/usr/sbin/nvme ]] 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:00.303 [2024-06-11 09:23:32.040646] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:09:00.303 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:00.303 could not add new controller: failed to write to nvme-fabrics device 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:00.303 09:23:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:02.218 09:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:02.218 09:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:02.218 09:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:02.218 09:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:02.218 09:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:04.132 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:04.132 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:04.132 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:04.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 
-- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.133 [2024-06-11 09:23:35.728826] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:04.133 09:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.517 09:23:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:05.517 09:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:05.517 09:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:05.517 09:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:05.517 09:23:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.060 [2024-06-11 09:23:39.427364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:08.060 
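Each pass of the seq 1 5 loop above runs the same life cycle: build a subsystem, expose it over TCP, connect from the host, verify the namespace surfaces as a block device, then tear everything down. One iteration, condensed into direct rpc.py calls (equivalent to the rpc_cmd wrapper in the trace):

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # nsid 5
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # ...poll lsblk until serial SPDKISFASTANDAWESOME appears...
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1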
09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.060 09:23:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:09.444 09:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:09.444 09:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:09.444 09:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:09.444 09:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:09.444 09:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:11.369 09:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:11.369 09:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:11.369 09:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:11.369 09:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:11.369 09:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:11.369 09:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:11.369 09:23:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:11.369 09:23:43 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.369 [2024-06-11 09:23:43.126399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.369 09:23:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:13.280 09:23:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:13.280 09:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:13.280 09:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:13.280 09:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:13.280 09:23:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:15.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.194 [2024-06-11 09:23:46.841568] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.194 09:23:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:16.580 09:23:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:16.580 09:23:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:16.580 09:23:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 
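The lsblk/grep churn in the trace is the waitforserial helper: it polls the block-device list until a device advertising the subsystem serial shows up, or a retry budget runs out. A sketch reconstructed from the commands visible above (the real helper lives in test/common/autotest_common.sh):

  waitforserial() {
      local serial=$1 i=0
      sleep 2                                   # give the connect time to settle
      while (( i++ <= 15 )); do
          if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
              return 0                          # device with matching serial found
          fi
          sleep 2
      done
      return 1
  }
  waitforserial SPDKISFASTANDAWESOME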
00:09:16.580 09:23:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:16.580 09:23:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:19.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:19.125 09:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:19.126 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.126 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.126 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.126 09:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.126 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.126 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.126 [2024-06-11 09:23:50.464178] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.126 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.126 09:23:50 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:19.126 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.126 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.126 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.126 09:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:19.126 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.126 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.126 09:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.126 09:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:20.507 09:23:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:20.507 09:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:20.507 09:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:20.507 09:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:20.507 09:23:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:22.420 09:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:22.420 09:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:22.420 09:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:22.420 09:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:22.420 09:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:22.420 09:23:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:22.420 09:23:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:22.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
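The waitforserial and waitforserial_disconnect calls traced above poll lsblk until a block device carrying the SPDK serial number appears on the initiator (or, after nvme disconnect, stops appearing). A minimal sketch of that polling pattern, reconstructed from the traced commands; the real helpers in common/autotest_common.sh may differ in details such as retry counts and error handling:

    # Sketch only: reconstructed from the xtrace above, not the verbatim helpers.
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            # Count block devices whose SERIAL column matches (e.g. SPDKISFASTANDAWESOME).
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

    waitforserial_disconnect() {
        local serial=$1 i=0
        # After 'nvme disconnect', wait until no device reports the serial any more.
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( i++ > 15 )) && return 1
            sleep 2
        done
        return 0
    }

In the trace the serial shows up after a single 2-second sleep (nvme_devices=1 on the first check), so each connect/disconnect cycle completes in one iteration.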
00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.420 [2024-06-11 09:23:54.142934] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.420 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.421 [2024-06-11 09:23:54.203055] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.421 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.681 [2024-06-11 09:23:54.267243] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.681 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.682 [2024-06-11 09:23:54.327456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- [[ 0 == 0 ]]
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- [[ 0 == 0 ]]
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:22.682 [2024-06-11 09:23:54.327456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- [[ 0 == 0 ]]
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- [[ 0 == 0 ]]
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- [[ 0 == 0 ]]
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- [[ 0 == 0 ]]
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- [[ 0 == 0 ]]
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- [[ 0 == 0 ]]
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:09:22.682 "tick_rate": 2400000000,
00:09:22.682 "poll_groups": [
00:09:22.682 {
00:09:22.682 "name": "nvmf_tgt_poll_group_000",
00:09:22.682 "admin_qpairs": 0,
00:09:22.682 "io_qpairs": 224,
00:09:22.682 "current_admin_qpairs": 0,
00:09:22.682 "current_io_qpairs": 0,
00:09:22.682 "pending_bdev_io": 0,
00:09:22.682 "completed_nvme_io": 225,
00:09:22.682 "transports": [
00:09:22.682 {
00:09:22.682 "trtype": "TCP"
00:09:22.682 }
00:09:22.682 ]
00:09:22.682 },
00:09:22.682 {
00:09:22.682 "name": "nvmf_tgt_poll_group_001",
00:09:22.682 "admin_qpairs": 1,
00:09:22.682 "io_qpairs": 223,
00:09:22.682 "current_admin_qpairs": 0,
00:09:22.682 "current_io_qpairs": 0,
00:09:22.682 "pending_bdev_io": 0,
00:09:22.682 "completed_nvme_io": 226,
00:09:22.682 "transports": [
00:09:22.682 {
00:09:22.682 "trtype": "TCP"
00:09:22.682 }
00:09:22.682 ]
00:09:22.682 },
00:09:22.682 {
00:09:22.682 "name": "nvmf_tgt_poll_group_002",
00:09:22.682 "admin_qpairs": 6,
00:09:22.682 "io_qpairs": 218,
00:09:22.682 "current_admin_qpairs": 0,
00:09:22.682 "current_io_qpairs": 0,
00:09:22.682 "pending_bdev_io": 0,
00:09:22.682 "completed_nvme_io": 269,
00:09:22.682 "transports": [
00:09:22.682 {
00:09:22.682 "trtype": "TCP"
00:09:22.682 }
00:09:22.682 ]
00:09:22.682 },
00:09:22.682 {
00:09:22.682 "name": "nvmf_tgt_poll_group_003",
00:09:22.682 "admin_qpairs": 0,
00:09:22.682 "io_qpairs": 224,
00:09:22.682 "current_admin_qpairs": 0,
00:09:22.682 "current_io_qpairs": 0,
00:09:22.682 "pending_bdev_io": 0,
00:09:22.682 "completed_nvme_io": 519,
00:09:22.682 "transports": [
00:09:22.682 {
00:09:22.682 "trtype": "TCP"
00:09:22.682 }
00:09:22.682 ]
00:09:22.682 }
00:09:22.682 ]
00:09:22.682 }'
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:09:22.682 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:09:22.942 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:09:22.942 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:09:22.942 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 ))
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- '[' -n 955362 ']'
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 955362
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 955362 ']'
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 955362
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # uname
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 955362
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 955362'
killing process with pid 955362
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 955362
00:09:22.943 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 955362
00:09:23.203 09:23:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:23.203 09:23:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:23.203 09:23:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:23.203 09:23:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:23.203 09:23:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:23.203 09:23:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:23.203 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:23.203 09:23:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:25.118 09:23:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:25.118
00:09:25.118 real 0m37.084s
00:09:25.118 user 1m52.030s
00:09:25.118 sys 0m6.907s
00:09:25.118 09:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:09:25.118 09:23:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:25.118 ************************************
00:09:25.118 END TEST nvmf_rpc
00:09:25.118 ************************************
00:09:25.118 09:23:56 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:09:25.118 09:23:56 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:09:25.118 09:23:56 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:09:25.118 09:23:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:25.379 ************************************
00:09:25.379 START TEST nvmf_invalid
00:09:25.379 ************************************
00:09:25.379 09:23:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:09:25.379 * Looking for test storage...
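Before following invalid.sh any further, one helper from the nvmf_rpc run just above is worth spelling out. The (( 7 > 0 )) and (( 889 > 0 )) checks come from jsum (target/rpc.sh@19-20), which sums a numeric field across every poll group in the captured nvmf_get_stats JSON; the numbers check out against the stats block (admin: 0+1+6+0 = 7, io: 224+223+218+224 = 889). A sketch consistent with the traced jq/awk pipeline; exactly how $stats is fed to jq is an assumption here:

    # Sum a numeric jq filter over the captured nvmf_get_stats output.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    # Usage as in the trace: assert queue pairs were actually created.
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))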
00:09:25.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:25.379 09:23:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:31.965 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:31.966 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:31.966 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:31.966 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]]
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:09:31.966 Found net devices under 0000:4b:00.1: cvl_0_1
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:09:31.966 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:32.227 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:32.227 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:32.227 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:09:32.227 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:32.227 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:32.227 09:24:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:32.227 09:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:32.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:32.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms
00:09:32.227
00:09:32.227 --- 10.0.0.2 ping statistics ---
00:09:32.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:32.227 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms
00:09:32.227 09:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:32.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:32.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms
00:09:32.227
00:09:32.227 --- 10.0.0.1 ping statistics ---
00:09:32.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:32.227 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms
00:09:32.227 09:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:32.227 09:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0
00:09:32.227 09:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:32.227 09:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:32.227 09:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:32.227 09:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:32.227 09:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:32.227 09:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:32.228 09:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:32.489 09:24:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:09:32.489 09:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:32.489 09:24:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable
00:09:32.489 09:24:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:09:32.489 09:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=965024
00:09:32.489 09:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 965024
00:09:32.489 09:24:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:32.489 09:24:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 965024 ']'
00:09:32.489 09:24:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:32.489 09:24:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100
00:09:32.489 09:24:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:32.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:32.489 09:24:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable
00:09:32.489 09:24:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:09:32.489 [2024-06-11 09:24:04.121155] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
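The nvmf_tcp_init sequence traced at nvmf/common.sh@229-268 is what lets a single dual-port E810 host test itself over a real link: the target-side port is isolated in its own network namespace while the initiator port stays in the root namespace, so NVMe/TCP traffic between them crosses the physical wire. Condensed from the traced commands into a sketch (cvl_0_0 and cvl_0_1 are simply the interface names this rig was assigned):

    # Target port lives in a private netns; initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The sub-millisecond round trips above (0.613 ms and 0.358 ms) confirm the two ports really are wired back to back; nvmf_tgt is then launched inside the namespace via the NVMF_TARGET_NS_CMD prefix.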
00:09:32.489 [2024-06-11 09:24:04.121219] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.489 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.489 [2024-06-11 09:24:04.211610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.786 [2024-06-11 09:24:04.307007] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.786 [2024-06-11 09:24:04.307069] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.786 [2024-06-11 09:24:04.307084] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.786 [2024-06-11 09:24:04.307090] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.786 [2024-06-11 09:24:04.307096] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.786 [2024-06-11 09:24:04.307246] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.786 [2024-06-11 09:24:04.307391] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.786 [2024-06-11 09:24:04.307451] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.786 [2024-06-11 09:24:04.307453] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.356 09:24:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:33.356 09:24:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:09:33.356 09:24:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:33.356 09:24:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:33.356 09:24:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:33.356 09:24:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.356 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:33.356 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6907 00:09:33.617 [2024-06-11 09:24:05.230847] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:33.617 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:33.617 { 00:09:33.617 "nqn": "nqn.2016-06.io.spdk:cnode6907", 00:09:33.617 "tgt_name": "foobar", 00:09:33.617 "method": "nvmf_create_subsystem", 00:09:33.617 "req_id": 1 00:09:33.617 } 00:09:33.617 Got JSON-RPC error response 00:09:33.617 response: 00:09:33.617 { 00:09:33.617 "code": -32603, 00:09:33.617 "message": "Unable to find target foobar" 00:09:33.617 }' 00:09:33.617 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:33.617 { 00:09:33.617 "nqn": "nqn.2016-06.io.spdk:cnode6907", 00:09:33.617 "tgt_name": "foobar", 00:09:33.617 "method": "nvmf_create_subsystem", 00:09:33.617 "req_id": 1 00:09:33.617 } 00:09:33.617 Got JSON-RPC error response 00:09:33.617 response: 00:09:33.617 { 00:09:33.617 "code": -32603, 00:09:33.617 "message": "Unable to find target foobar" 00:09:33.617 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:33.617 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:33.617 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14614 00:09:33.878 [2024-06-11 09:24:05.455649] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14614: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:33.878 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:33.878 { 00:09:33.878 "nqn": "nqn.2016-06.io.spdk:cnode14614", 00:09:33.878 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:33.878 "method": "nvmf_create_subsystem", 00:09:33.878 "req_id": 1 00:09:33.878 } 00:09:33.879 Got JSON-RPC error response 00:09:33.879 response: 00:09:33.879 { 00:09:33.879 "code": -32602, 00:09:33.879 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:33.879 }' 00:09:33.879 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:33.879 { 00:09:33.879 "nqn": "nqn.2016-06.io.spdk:cnode14614", 00:09:33.879 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:33.879 "method": "nvmf_create_subsystem", 00:09:33.879 "req_id": 1 00:09:33.879 } 00:09:33.879 Got JSON-RPC error response 00:09:33.879 response: 00:09:33.879 { 00:09:33.879 "code": -32602, 00:09:33.879 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:33.879 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:33.879 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:33.879 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16536 00:09:33.879 [2024-06-11 09:24:05.672301] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16536: invalid model number 'SPDK_Controller' 00:09:34.140 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:34.140 { 00:09:34.140 "nqn": "nqn.2016-06.io.spdk:cnode16536", 00:09:34.140 "model_number": "SPDK_Controller\u001f", 00:09:34.140 "method": "nvmf_create_subsystem", 00:09:34.140 "req_id": 1 00:09:34.140 } 00:09:34.140 Got JSON-RPC error response 00:09:34.140 response: 00:09:34.140 { 00:09:34.140 "code": -32602, 00:09:34.140 "message": "Invalid MN SPDK_Controller\u001f" 00:09:34.140 }' 00:09:34.140 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:34.140 { 00:09:34.140 "nqn": "nqn.2016-06.io.spdk:cnode16536", 00:09:34.140 "model_number": "SPDK_Controller\u001f", 00:09:34.140 "method": "nvmf_create_subsystem", 00:09:34.140 "req_id": 1 00:09:34.140 } 00:09:34.140 Got JSON-RPC error response 00:09:34.140 response: 00:09:34.140 { 00:09:34.140 "code": -32602, 00:09:34.140 "message": "Invalid MN SPDK_Controller\u001f" 00:09:34.140 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:34.140 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:34.140 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:34.140 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:09:34.140 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:09:34.140 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:09:34.140 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:09:34.140 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:34.140 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72
00:09:34.140 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48'
00:09:34.140 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H
00:09:34.140 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
[per-character xtrace condensed: invalid.sh@24-25 repeat the printf/echo/append step once per character, appending 6 I [ i ~ | 2 k Z * : _ $ c F Q t j . S]
00:09:34.141 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ H == \- ]]
00:09:34.141 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'H6I[i~|2kZ*:_$cFQtj.S'
00:09:34.141 09:24:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'H6I[i~|2kZ*:_$cFQtj.S' nqn.2016-06.io.spdk:cnode30674
00:09:34.401 [2024-06-11 09:24:06.053567] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30674: invalid serial number 'H6I[i~|2kZ*:_$cFQtj.S'
00:09:34.401 09:24:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:09:34.401 {
00:09:34.401 "nqn": "nqn.2016-06.io.spdk:cnode30674",
00:09:34.401 "serial_number": "H6I[i~|2kZ*:_$cFQtj.S",
00:09:34.401 "method": "nvmf_create_subsystem",
00:09:34.401 "req_id": 1
00:09:34.401 }
00:09:34.401 Got JSON-RPC error response
00:09:34.401 response:
00:09:34.401 {
00:09:34.401 "code": -32602,
00:09:34.401 "message": "Invalid SN H6I[i~|2kZ*:_$cFQtj.S"
00:09:34.401 }'
00:09:34.401 09:24:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: ... "message": "Invalid SN H6I[i~|2kZ*:_$cFQtj.S" } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:09:34.401 09:24:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:09:34.401 09:24:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:09:34.401 09:24:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:09:34.402 09:24:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:09:34.402 09:24:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:09:34.402 09:24:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:09:34.402 09:24:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:09:34.402 09:24:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115
00:09:34.402 09:24:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73'
00:09:34.402 09:24:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s
[per-character xtrace condensed: 40 further invalid.sh@24-25 iterations append i ^ | 5 a { d W : G L ] P V f j C ~ S E 2 f T | * 5 } E n d z N I * < = 0 $ ! V]
00:09:34.664 09:24:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ s == \- ]]
00:09:34.664 09:24:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'si^|5a{dW:GL]PVfjC~SE2fT|*5}EndzNI*<=0$!V'
00:09:34.664 09:24:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'si^|5a{dW:GL]PVfjC~SE2fT|*5}EndzNI*<=0$!V' nqn.2016-06.io.spdk:cnode4567
00:09:34.924 [2024-06-11 09:24:06.587338] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4567: invalid model number 'si^|5a{dW:GL]PVfjC~SE2fT|*5}EndzNI*<=0$!V'
00:09:34.924 09:24:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:09:34.924 {
00:09:34.924 "nqn": "nqn.2016-06.io.spdk:cnode4567",
00:09:34.924 "model_number": "si^|5a{dW:GL]PVfjC~SE2fT|*5}EndzNI*<=0$!V",
00:09:34.924 "method": "nvmf_create_subsystem",
00:09:34.924 "req_id": 1
00:09:34.924 }
00:09:34.924 Got JSON-RPC error response
00:09:34.924 response:
00:09:34.924 {
00:09:34.924 "code": -32602,
00:09:34.924 "message": "Invalid MN si^|5a{dW:GL]PVfjC~SE2fT|*5}EndzNI*<=0$!V"
00:09:34.924 }'
00:09:34.925 09:24:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: ... "message": "Invalid MN si^|5a{dW:GL]PVfjC~SE2fT|*5}EndzNI*<=0$!V" } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:09:34.925 09:24:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:09:35.185 [2024-06-11 09:24:06.808116] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:35.185 09:24:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:09:35.446 09:24:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:09:35.446 09:24:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:09:35.446 09:24:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:09:35.446 09:24:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:09:35.446 09:24:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:09:35.446 [2024-06-11 09:24:07.257636] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:09:35.706 09:24:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:09:35.706 {
00:09:35.706 "nqn": "nqn.2016-06.io.spdk:cnode",
00:09:35.706 "listen_address": {
00:09:35.706 "trtype": "tcp",
00:09:35.706 "traddr": "",
00:09:35.706 "trsvcid": "4421"
00:09:35.706 },
00:09:35.706 "method": "nvmf_subsystem_remove_listener",
00:09:35.706 "req_id": 1
00:09:35.706 }
00:09:35.706 Got JSON-RPC error response
00:09:35.706 response:
00:09:35.706 {
00:09:35.706 "code": -32602,
00:09:35.706 "message": "Invalid parameters"
00:09:35.706 }'
00:09:35.706 09:24:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: ... "message": "Invalid parameters" } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:09:35.706 09:24:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12251 -i 0
00:09:35.706 [2024-06-11 09:24:07.478292] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12251: invalid cntlid range [0-65519]
00:09:35.706 09:24:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:09:35.706 {
00:09:35.706 "nqn": "nqn.2016-06.io.spdk:cnode12251",
00:09:35.706 "min_cntlid": 0,
00:09:35.706 "method": "nvmf_create_subsystem",
00:09:35.706 "req_id": 1
00:09:35.706 }
00:09:35.706 Got JSON-RPC error response
00:09:35.706 response:
00:09:35.706 {
00:09:35.706 "code": -32602,
00:09:35.706 "message": "Invalid cntlid range [0-65519]"
00:09:35.706 }'
00:09:35.706 09:24:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: ... "message": "Invalid cntlid range [0-65519]" } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:09:35.706 09:24:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29048 -i 65520
00:09:35.966 [2024-06-11 09:24:07.695013] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29048: invalid cntlid range [65520-65519]
00:09:35.966 09:24:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request:
00:09:35.966 {
00:09:35.966 "nqn": "nqn.2016-06.io.spdk:cnode29048",
00:09:35.967 "min_cntlid": 65520,
00:09:35.967 "method": "nvmf_create_subsystem",
00:09:35.967 "req_id": 1
00:09:35.967 }
00:09:35.967 Got JSON-RPC error response
00:09:35.967 response:
00:09:35.967 {
00:09:35.967 "code": -32602,
00:09:35.967 "message": "Invalid cntlid range [65520-65519]"
00:09:35.967 }'
00:09:35.967 09:24:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: ... "message": "Invalid cntlid range [65520-65519]" } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:09:35.967 09:24:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5921 -I 0
00:09:36.227 [2024-06-11 09:24:07.915750] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5921: invalid cntlid range [1-0]
00:09:36.227 09:24:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request:
00:09:36.227 {
00:09:36.227 "nqn": "nqn.2016-06.io.spdk:cnode5921",
00:09:36.227 "max_cntlid": 0,
00:09:36.227 "method": "nvmf_create_subsystem",
00:09:36.227 "req_id": 1
00:09:36.227 }
00:09:36.227 Got JSON-RPC error response
00:09:36.227 response:
00:09:36.227 {
00:09:36.227 "code": -32602,
00:09:36.227 "message": "Invalid cntlid range [1-0]"
00:09:36.227 }'
00:09:36.227 09:24:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: ... "message": "Invalid cntlid range [1-0]" } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:09:36.227 09:24:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29082 -I 65520
00:09:36.487 [2024-06-11 09:24:08.132459] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29082: invalid cntlid range [1-65520]
00:09:36.487 09:24:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request:
00:09:36.487 {
00:09:36.487 "nqn": "nqn.2016-06.io.spdk:cnode29082",
00:09:36.487 "max_cntlid": 65520,
00:09:36.487 "method": "nvmf_create_subsystem",
00:09:36.487 "req_id": 1
00:09:36.487 }
00:09:36.487 Got JSON-RPC error response
00:09:36.487 response:
00:09:36.487 {
00:09:36.487 "code": -32602,
00:09:36.487 "message": "Invalid cntlid range [1-65520]"
00:09:36.487 }'
00:09:36.487 09:24:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: ... "message": "Invalid cntlid range [1-65520]" } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:09:36.487 09:24:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29201 -i 6 -I 5
00:09:36.747 [2024-06-11 09:24:08.353204] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29201: invalid cntlid range [6-5]
00:09:36.747 09:24:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request:
00:09:36.747 {
00:09:36.747 "nqn": "nqn.2016-06.io.spdk:cnode29201",
00:09:36.747 "min_cntlid": 6,
00:09:36.747 "max_cntlid": 5,
00:09:36.747 "method": "nvmf_create_subsystem",
00:09:36.747 "req_id": 1
00:09:36.747 }
00:09:36.747 Got JSON-RPC error response
00:09:36.747 response:
00:09:36.747 {
00:09:36.747 "code": -32602,
00:09:36.747 "message": "Invalid cntlid range [6-5]"
00:09:36.747 }'
00:09:36.747 09:24:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: ... "message": "Invalid cntlid range [6-5]" } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:09:36.747 09:24:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:09:36.748 09:24:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request:
00:09:36.748 {
00:09:36.748 "name": "foobar",
00:09:36.748 "method": "nvmf_delete_target",
00:09:36.748 "req_id": 1
00:09:36.748 }
00:09:36.748 Got JSON-RPC error response
00:09:36.748 response:
00:09:36.748 {
00:09:36.748 "code": -32602,
00:09:36.748 "message": "The specified target doesn'\''t exist, cannot delete it."
00:09:36.748 }'
00:09:36.748 09:24:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: ... "message": "The specified target doesn't exist, cannot delete it." } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]]
00:09:36.748 09:24:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:09:36.748 09:24:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini
00:09:36.748 09:24:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:36.748 09:24:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync
00:09:36.748 09:24:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:36.748 09:24:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e
00:09:36.748 09:24:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:36.748 09:24:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:36.748 rmmod nvme_tcp
00:09:36.748 rmmod nvme_fabrics
00:09:36.748 rmmod nvme_keyring
00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e
00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0
00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 965024 ']'
00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 965024
00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@949 -- # '[' -z 965024 ']'
00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # kill -0 965024
00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # uname
00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 965024
00:09:37.008
09:24:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 965024' 00:09:37.008 killing process with pid 965024 00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@968 -- # kill 965024 00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@973 -- # wait 965024 00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:37.008 09:24:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.553 09:24:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:39.553 00:09:39.553 real 0m13.880s 00:09:39.553 user 0m22.538s 00:09:39.553 sys 0m6.258s 00:09:39.553 09:24:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:39.553 09:24:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:39.553 ************************************ 00:09:39.553 END TEST nvmf_invalid 00:09:39.553 ************************************ 00:09:39.553 09:24:10 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:39.553 09:24:10 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:39.553 09:24:10 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:39.553 09:24:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:39.553 ************************************ 00:09:39.553 START TEST nvmf_abort 00:09:39.553 ************************************ 00:09:39.553 09:24:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:39.553 * Looking for test storage... 
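
Before the log moves on to nvmf_abort, the shape of the nvmf_invalid test above is worth pinning down: build a random string one character past an NVMe limit, feed it to nvmf_create_subsystem, and pattern-match the JSON-RPC rejection. A minimal sketch of the same idea, assuming it runs from an SPDK checkout with a target already listening; my_gen_random_s is a hypothetical stand-in for the script's gen_random_s:

#!/usr/bin/env bash
# Mirror of the generator visible in the xtrace above: pick a code point,
# render it with printf/echo -e, and append one character per pass.
my_gen_random_s() {
    local length=$1 ll string= code
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( 33 + RANDOM % 94 ))                  # ASCII 33..126; skipping 32 keeps $() from trimming a trailing space
        string+=$(echo -e "\\x$(printf %x "$code")")  # code point -> character
    done
    echo "$string"
}

# NVMe allots 20 bytes to the serial number and 40 to the model number, so a
# 21- or 41-character string must bounce with "Invalid SN" / "Invalid MN".
sn=$(my_gen_random_s 21)
out=$(scripts/rpc.py nvmf_create_subsystem -s "$sn" nqn.2016-06.io.spdk:cnode1 2>&1) || true
[[ $out == *"Invalid SN"* ]] && echo "serial rejected as expected"

# The cntlid probes earlier follow the same pattern: -i 0 sits below the
# valid controller-ID floor and draws "Invalid cntlid range [0-65519]".
out=$(scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 0 2>&1) || true
[[ $out == *"Invalid cntlid range"* ]] && echo "cntlid rejected as expected"
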
00:09:39.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
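
Worth noting from the common.sh prologue just above: the harness mints one host identity per run with nvme-cli and carries it around as ready-made flags. A sketch of that pattern; the parameter expansion and the connect line are illustrative, not lifted from common.sh:

# nvme-cli generates a spec-format host NQN; the host ID is simply its UUID tail.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # strip through the last ':' to keep the UUID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

# Later connects can then present the same identity every time:
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"
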
00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:39.553 09:24:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:39.554 09:24:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:46.141 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:46.142 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.142 09:24:17 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:46.142 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:46.142 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:46.142 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:46.142 09:24:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:46.404 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:46.404 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:46.404 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:46.404 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:46.404 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:46.404 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:46.404 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:46.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:46.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:09:46.404 00:09:46.404 --- 10.0.0.2 ping statistics --- 00:09:46.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.404 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:09:46.404 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:46.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:46.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:09:46.665 00:09:46.665 --- 10.0.0.1 ping statistics --- 00:09:46.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.665 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=970809 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 970809 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 970809 ']' 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:46.665 09:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:46.665 [2024-06-11 09:24:18.314066] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:09:46.665 [2024-06-11 09:24:18.314130] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.665 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.665 [2024-06-11 09:24:18.384539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:46.665 [2024-06-11 09:24:18.458990] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.665 [2024-06-11 09:24:18.459026] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
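
By this point nvmftestinit has rebuilt the whole TCP rig and both pings have passed. Read back from the log, the recipe is short; a condensed restatement with the interface, namespace, and helper names exactly as logged (paths shortened to the SPDK tree root, run as root):

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

# The target then runs entirely inside the namespace; waitforlisten is the
# autotest helper that polls the RPC socket until the app answers.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
waitforlisten $!

Keeping the target's NIC in its own namespace is what lets a single host exercise a real initiator-to-target TCP path over physical E810 ports.
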
00:09:46.665 [2024-06-11 09:24:18.459033] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.665 [2024-06-11 09:24:18.459040] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.665 [2024-06-11 09:24:18.459046] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.665 [2024-06-11 09:24:18.459154] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.665 [2024-06-11 09:24:18.459320] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.665 [2024-06-11 09:24:18.459371] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.606 [2024-06-11 09:24:19.243043] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.606 Malloc0 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.606 Delay0 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:47.606 09:24:19 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.606 [2024-06-11 09:24:19.326739] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:47.606 09:24:19 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:47.606 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.866 [2024-06-11 09:24:19.457044] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:50.409 Initializing NVMe Controllers 00:09:50.409 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:50.409 controller IO queue size 128 less than required 00:09:50.409 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:50.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:50.409 Initialization complete. Launching workers. 
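The rpc_cmd invocations replayed above build the abort target step by step; expressed as plain rpc.py calls with the same arguments as in the trace, the sequence would look like:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000               # large artificial latency on every op
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The delay bdev is what makes the abort run meaningful: I/O sits in flight long enough that nearly all of the abort tool's requests (queue depth 128, via -q 128) target commands that have not yet completed, which is what the statistics below confirm.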
00:09:50.409 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 125, failed: 33147 00:09:50.409 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33210, failed to submit 62 00:09:50.409 success 33151, unsuccess 59, failed 0 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:50.409 rmmod nvme_tcp 00:09:50.409 rmmod nvme_fabrics 00:09:50.409 rmmod nvme_keyring 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 970809 ']' 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 970809 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 970809 ']' 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 970809 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 970809 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 970809' 00:09:50.409 killing process with pid 970809 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@968 -- # kill 970809 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@973 -- # wait 970809 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.409 09:24:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.321 09:24:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:52.321 00:09:52.321 real 0m13.092s 00:09:52.321 user 0m14.348s 00:09:52.321 sys 0m6.234s 00:09:52.321 09:24:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:52.321 09:24:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:52.321 ************************************ 00:09:52.321 END TEST nvmf_abort 00:09:52.321 ************************************ 00:09:52.321 09:24:24 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:52.321 09:24:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:52.321 09:24:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:52.321 09:24:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:52.321 ************************************ 00:09:52.321 START TEST nvmf_ns_hotplug_stress 00:09:52.321 ************************************ 00:09:52.321 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:52.581 * Looking for test storage... 00:09:52.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.581 09:24:24 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.581 09:24:24 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:52.581 09:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:00.788 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.788 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:00.789 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.789 09:24:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:00.789 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:00.789 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
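Device discovery in the trace above is pure sysfs: each PCI function matching the E810 ID (0x8086:0x159b) is mapped to its kernel netdev by globbing the device's net/ directory. A minimal stand-alone version of that lookup, using this run's PCI addresses:

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done

which is exactly where the cvl_0_0 and cvl_0_1 names used throughout this log come from.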
00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:00.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:10:00.789 00:10:00.789 --- 10.0.0.2 ping statistics --- 00:10:00.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.789 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:00.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:10:00.789 00:10:00.789 --- 10.0.0.1 ping statistics --- 00:10:00.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.789 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=975679 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 975679 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 975679 ']' 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:00.789 09:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:00.789 [2024-06-11 09:24:31.510597] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:10:00.789 [2024-06-11 09:24:31.510665] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.789 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.789 [2024-06-11 09:24:31.580180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:00.789 [2024-06-11 09:24:31.656110] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
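Target startup mirrors the abort test: nvmf_tgt runs inside the target namespace and the harness blocks in waitforlisten until the RPC socket answers. A hedged sketch of that pattern (the polling loop is paraphrased, not copied from autotest_common.sh):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the default RPC socket until the target is up
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
        sleep 0.5
    done

The -m 0xE core mask pins the reactors to cores 1-3, matching the three 'Reactor started on core' notices printed once startup completes.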
00:10:00.789 [2024-06-11 09:24:31.656147] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.789 [2024-06-11 09:24:31.656154] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.789 [2024-06-11 09:24:31.656160] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.789 [2024-06-11 09:24:31.656166] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.789 [2024-06-11 09:24:31.656274] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.789 [2024-06-11 09:24:31.656433] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.789 [2024-06-11 09:24:31.656525] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.789 09:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:00.789 09:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:10:00.789 09:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:00.789 09:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:00.789 09:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:00.789 09:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.789 09:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:00.789 09:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:01.050 [2024-06-11 09:24:32.612556] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.050 09:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:01.050 09:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.309 [2024-06-11 09:24:33.002093] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.309 09:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:01.568 09:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:01.828 Malloc0 00:10:01.828 09:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:01.828 Delay0 00:10:02.088 09:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.088 09:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:02.347 NULL1 00:10:02.347 09:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:02.606 09:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=976358 00:10:02.606 09:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:02.606 09:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:02.606 09:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.606 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.985 Read completed with error (sct=0, sc=11) 00:10:03.985 09:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.985 09:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:03.985 09:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:04.244 true 00:10:04.244 09:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:04.244 09:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.185 09:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.185 09:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:05.185 09:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:05.445 true 00:10:05.445 09:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:05.445 09:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.705 09:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.705 09:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:05.706 09:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:05.965 true 00:10:05.965 09:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:05.965 09:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.165 09:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.165 09:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:07.165 09:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:07.424 true 00:10:07.424 09:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:07.424 09:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.364 09:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.364 09:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:08.364 09:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:08.623 true 00:10:08.623 09:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:08.623 09:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.884 09:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.144 09:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:09.144 09:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:09.404 true 00:10:09.404 09:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:09.404 09:24:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.343 09:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.603 09:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:10.603 09:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:10.603 true 00:10:10.603 09:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:10.603 09:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.863 09:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.123 09:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:11.123 09:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:11.383 true 00:10:11.383 09:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:11.383 09:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.321 09:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.581 09:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:12.581 09:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:12.841 true 00:10:12.841 09:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:12.841 09:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.841 09:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.101 09:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:13.101 09:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:13.363 true 00:10:13.363 09:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:13.363 09:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.813 09:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.813 09:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:14.813 09:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:15.074 true 00:10:15.074 09:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:15.074 09:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.012 09:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.012 09:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:16.012 09:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:16.271 true 00:10:16.271 09:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:16.271 09:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.531 09:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.531 09:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:16.531 09:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:16.790 true 00:10:16.790 09:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:16.790 09:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.730 09:24:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.990 09:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:17.990 09:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:18.249 true 00:10:18.249 09:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:18.249 09:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.508 09:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.768 09:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:18.768 09:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:18.768 true 00:10:19.028 09:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:19.028 09:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.968 09:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.968 09:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:19.968 09:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:20.228 true 00:10:20.228 09:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:20.228 09:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.488 09:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.748 09:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:20.748 09:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:21.008 true 00:10:21.008 09:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:21.008 09:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.949 09:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.209 09:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:22.209 09:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:22.470 true 00:10:22.470 09:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:22.470 09:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.470 09:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.730 09:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:22.730 09:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:22.989 true 00:10:22.989 09:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:22.989 09:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.371 09:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.371 09:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:24.372 09:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:24.632 true 00:10:24.632 09:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:24.632 09:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.571 09:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.571 09:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:25.571 09:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:25.832 true 00:10:25.832 09:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:25.832 09:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.092 09:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.352 09:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:26.352 09:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:26.352 true 00:10:26.613 09:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:26.613 09:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.553 09:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.814 09:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:27.814 09:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:28.074 true 00:10:28.074 09:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:28.074 09:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.678 09:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.938 09:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:28.938 09:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:29.199 
true 00:10:29.199 09:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:29.199 09:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.460 09:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.721 09:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:29.721 09:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:29.721 true 00:10:29.721 09:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:29.721 09:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.104 09:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.104 09:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:31.104 09:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:31.365 true 00:10:31.365 09:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:31.365 09:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.306 09:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.306 09:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:32.306 09:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:32.567 true 00:10:32.567 09:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358 00:10:32.567 09:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.828 09:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:32.828 Initializing NVMe Controllers
00:10:32.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:32.828 Controller IO queue size 128, less than required.
00:10:32.828 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:32.828 Controller IO queue size 128, less than required.
00:10:32.828 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:32.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:32.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:32.828 Initialization complete. Launching workers.
00:10:32.828 ========================================================
00:10:32.828                                                                             Latency(us)
00:10:32.828 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:10:32.828 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1743.45       0.85   47976.83    2464.81 1107095.13
00:10:32.828 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   19581.00       9.56    6536.60    1709.30  504112.55
00:10:32.828 ========================================================
00:10:32.828 Total                                                                    :   21324.45      10.41    9924.69    1709.30 1107095.13
00:10:32.828
00:10:33.089 09:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:10:33.089 09:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:10:33.089 true
00:10:33.089 09:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 976358
00:10:33.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (976358) - No such process
00:10:33.089 09:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 976358
00:10:33.089 09:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:33.349 09:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:33.610 09:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:33.610 09:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:33.610 09:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:33.610 09:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:33.610 09:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:33.871 null0
00:10:33.871 09:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:33.871 09:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:33.871 09:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create
null1 100 4096 00:10:34.131 null1 00:10:34.131 09:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:34.132 09:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:34.132 09:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:34.132 null2 00:10:34.132 09:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:34.132 09:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:34.132 09:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:34.392 null3 00:10:34.392 09:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:34.392 09:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:34.392 09:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:34.652 null4 00:10:34.652 09:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:34.652 09:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:34.652 09:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:34.913 null5 00:10:34.913 09:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:34.913 09:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:34.913 09:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:35.173 null6 00:10:35.173 09:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:35.174 09:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:35.174 09:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:35.174 null7 00:10:35.434 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:35.434 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:35.434 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:35.434 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:35.434 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
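For readability, the first phase of the stress test traced above (the add/remove/resize churn at ns_hotplug_stress.sh lines 44-50, which ends when kill -0 first reports "No such process") can be sketched as the following loop. This is a reconstruction from the xtrace entries, not the verbatim SPDK script; rpc, perf_pid and the starting null_size are shorthand assumptions, and the full rpc.py path is abbreviated.

    #!/usr/bin/env bash
    # Sketch of ns_hotplug_stress.sh lines 44-53, inferred from the trace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=976358   # assumption: PID of the background I/O generator seen in "kill -0 976358"
    null_size=1000    # assumption: the starting size is not visible in this excerpt (run above is at 1014+)

    while kill -0 "$perf_pid" 2> /dev/null; do                          # sh@44: loop while the I/O process lives
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: hot-remove NSID 1 under I/O
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: hot-add the Delay0 bdev back
        null_size=$((null_size + 1))                                    # sh@49: grow the companion null bdev
        "$rpc" bdev_null_resize NULL1 "$null_size"                      # sh@50: prints "true" on success
    done
    wait "$perf_pid"                                                    # sh@53: reap the finished I/O process

The repeated "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are consistent with this scenario: reads issued against a namespace that is momentarily hot-removed complete with an NVMe error status, and the initiator rate-limits the resulting log message.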
00:10:35.434 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:35.434 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:35.434 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:35.434 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:35.434 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:35.434 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.434 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:35.434 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:35.434 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:35.434 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:35.434 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:35.434 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:35.434 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:35.434 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
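Each add_remove worker traced here (ns_hotplug_stress.sh lines 14-18) flips a single namespace on and off; the loop bound of ten comes straight from the (( i < 10 )) entries. A sketch under the same rpc shorthand assumption as above:

    # Sketch of add_remove() from the sh@14-18 xtrace entries: one worker
    # repeatedly maps its bdev into cnode1 as namespace $nsid, then unmaps it.
    add_remove() {
        local nsid=$1 bdev=$2                                                           # sh@14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do                                                  # sh@16: ten add/remove rounds
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }

Because eight such workers run concurrently against the same subsystem, their xtrace output interleaves, which is why the add/remove entries in this stretch of the log appear out of order.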
00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
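The fan-out itself is traced at lines 58-66: one null bdev per worker is created first (bdev_null_create null$i 100 4096, i.e. a 100 MB bdev with a 4096-byte block size), then the workers are launched in the background and their PIDs collected for the later wait (the "wait 982821 982822 ..." entry). A sketch, again reconstructed from the trace rather than copied from the script:

    # Sketch of ns_hotplug_stress.sh lines 58-66, inferred from the trace.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do        # sh@59-60: create null0..null7
        "$rpc" bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do        # sh@62-64: one worker per namespace
        add_remove "$((i + 1))" "null$i" &      # sh@63: worker for NSID i+1
        pids+=("$!")                            # sh@64: remember the worker PID
    done
    wait "${pids[@]}"                           # sh@66: block until all workers finish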
00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 982821 982822 982824 982826 982828 982830 982832 982834 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:35.435 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:35.696 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:35.696 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.696 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.696 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:10:35.697 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:35.958 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.958 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:35.958 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:35.958 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:35.958 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:35.958 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:35.958 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:35.958 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:36.218 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.218 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.218 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:36.218 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.218 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.218 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:36.218 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.218 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.218 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:36.218 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.218 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.218 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:36.218 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.218 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.218 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:36.219 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.219 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.219 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:36.219 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.219 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.219 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:36.219 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.219 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.219 09:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:36.219 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.478 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:36.479 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:36.479 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:36.479 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:36.479 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:36.479 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:36.479 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:36.479 09:25:08 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.479 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.479 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:36.738 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.004 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:37.264 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:37.264 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:37.264 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:37.264 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:37.264 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:37.264 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:37.264 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.264 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.264 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:37.264 09:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:37.264 09:25:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.264 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.264 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:37.264 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.264 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.264 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:37.525 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:37.786 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:37.786 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.786 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.786 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:37.786 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:37.786 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.786 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.787 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:37.787 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.787 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.787 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.787 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:37.787 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.787 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.787 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:37.787 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:10:37.787 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.787 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:38.048 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.048 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.048 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:38.048 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:38.048 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:38.048 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.048 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.048 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:38.048 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.048 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.048 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:38.048 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:38.049 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:38.049 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:38.049 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.049 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.049 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:38.049 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:38.049 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.049 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.049 09:25:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:38.049 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:38.049 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.313 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.313 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.313 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:38.313 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.313 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.313 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:38.313 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.313 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.313 09:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:38.313 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:38.313 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.313 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.313 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:38.313 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:38.313 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.313 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.313 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:38.313 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.313 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.313 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.574 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:38.836 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:38.836 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:38.836 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.836 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.836 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:38.836 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.836 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.836 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:38.836 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.836 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.836 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:38.836 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:38.836 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:38.836 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:38.836 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.836 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.836 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:39.098 09:25:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.098 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:39.359 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.359 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.359 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:39.359 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:39.359 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:39.359 09:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:39.359 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.359 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.359 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.359 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.359 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:39.359 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.359 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.359 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:39.359 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.359 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.359 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.359 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.620 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.620 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.620 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.620 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.620 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:39.620 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:39.620 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:39.620 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:39.620 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:39.620 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:39.620 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:39.620 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:39.620 rmmod nvme_tcp 00:10:39.620 rmmod nvme_fabrics 00:10:39.620 rmmod nvme_keyring 00:10:39.880 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:39.880 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:39.880 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:39.880 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 975679 ']' 00:10:39.880 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 975679 00:10:39.881 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 975679 ']' 00:10:39.881 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 975679 00:10:39.881 09:25:11 
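
What the trace above is exercising: ns_hotplug_stress.sh@16-18 is a tight counter loop that, on every pass, either attaches one of the null bdevs (null0-null7) to nqn.2016-06.io.spdk:cnode1 as a namespace or detaches a namespace by ID, so the target keeps gaining and losing namespaces under load. A minimal bash sketch of that pattern, assuming a random add/remove choice (only the script's xtrace is visible in this log, not its source):

# Sketch only: rpc.py path and NQN are copied from the trace; the random
# scheduling below is an inference, not the script's literal body.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
for (( i = 0; i < 10; ++i )); do
    n=$(( RANDOM % 8 + 1 ))    # namespace IDs 1..8 map to bdevs null0..null7
    if (( RANDOM % 2 )); then
        "$RPC" nvmf_subsystem_add_ns -n "$n" "$NQN" "null$(( n - 1 ))"
    else
        "$RPC" nvmf_subsystem_remove_ns "$NQN" "$n"
    fi
done
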
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname
00:10:39.881 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:10:39.881 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 975679
00:10:39.881 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:10:39.881 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:10:39.881 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 975679'
00:10:39.881 killing process with pid 975679
00:10:39.881 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 975679
00:10:39.881 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 975679
00:10:39.881 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:39.881 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:39.881 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:39.881 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:39.881 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:39.881 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:39.881 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:39.881 09:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:42.425 09:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:42.425
00:10:42.425 real 0m49.631s
00:10:42.425 user 3m17.068s
00:10:42.425 sys 0m15.287s
00:10:42.425 09:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable
00:10:42.425 09:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:10:42.425 ************************************
00:10:42.425 END TEST nvmf_ns_hotplug_stress
00:10:42.425 ************************************
00:10:42.425 09:25:13 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:42.425 09:25:13 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:10:42.425 09:25:13 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:10:42.425 09:25:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:42.425 ************************************
00:10:42.425 START TEST nvmf_connect_stress
00:10:42.425 ************************************
00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:42.425 * Looking for test storage...
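
Before the new test's output begins, the shutdown just logged for nvmf_ns_hotplug_stress is worth unpacking: stop the nvmf_tgt process, retry unloading the NVMe kernel modules until their references drain, then remove the test network namespace and flush the test address. Condensed into a sketch (names and the {1..20} retry bound come from the trace; the loop body is simplified, and in the harness this is split across killprocess, nvmfcleanup and nvmf_tcp_fini rather than one linear script):

# Sketch of the traced teardown pattern, not the harness source.
kill 975679 && wait 975679            # stop the target app (pid from this run)
sync
set +e                                # unloads can fail while references drain
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
done
set -e
_remove_spdk_ns                       # delete the cvl_0_0_ns_spdk namespace
ip -4 addr flush cvl_0_1              # drop the initiator-side test address
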
00:10:42.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.425 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:42.426 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:42.426 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:42.426 09:25:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:42.426 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:42.426 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.426 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:42.426 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:42.426 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:42.426 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.426 09:25:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:42.426 09:25:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.426 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:42.426 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:42.426 09:25:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:42.426 09:25:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.026 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:49.027 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:49.027 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:49.027 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:49.027 09:25:20 
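
The device-discovery pass above resolves each matching PCI function to its kernel net device by globbing sysfs; a few lines capture the idea (the PCI addresses and the 0x8086:0x159b E810 IDs are from the trace, while the loop itself is a paraphrase of the traced nvmf/common.sh logic):

# Sketch: map E810 PCI functions to their bound net devices, as the trace does.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for path in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e $path ]] || continue            # no net device bound to this function
        echo "Found net devices under $pci: ${path##*/}"
    done
done
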
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:49.027 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:49.027 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:49.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:49.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms
00:10:49.288
00:10:49.288 --- 10.0.0.2 ping statistics ---
00:10:49.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:49.288 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:49.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:49.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms
00:10:49.288
00:10:49.288 --- 10.0.0.1 ping statistics ---
00:10:49.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:49.288 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=987989
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 987989
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 987989 ']'
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:49.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable
00:10:49.288 09:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:49.288 [2024-06-11 09:25:21.037270] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
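
The two successful pings above are the final check of nvmf_tcp_init: the host's two E810 ports are used as a back-to-back pair, with the target side (10.0.0.2 on cvl_0_0, inside cvl_0_0_ns_spdk) isolated in a network namespace and the initiator side (10.0.0.1 on cvl_0_1) left in the root namespace, so test traffic crosses a real TCP path. Condensed from the commands traced above (the gist, not the full function):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
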
00:10:49.288 [2024-06-11 09:25:21.037321] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.288 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.288 [2024-06-11 09:25:21.102337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:49.548 [2024-06-11 09:25:21.167292] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.548 [2024-06-11 09:25:21.167333] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.548 [2024-06-11 09:25:21.167341] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.548 [2024-06-11 09:25:21.167347] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.548 [2024-06-11 09:25:21.167353] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.548 [2024-06-11 09:25:21.167452] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.548 [2024-06-11 09:25:21.167707] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.548 [2024-06-11 09:25:21.167707] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.548 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:49.548 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:10:49.548 09:25:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:49.548 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:49.548 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.548 09:25:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.548 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:49.548 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.548 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.548 [2024-06-11 09:25:21.301234] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.548 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.548 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:49.548 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.548 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.548 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.548 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.549 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.549 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.549 [2024-06-11 09:25:21.344505] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.549 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.549 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:49.549 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.549 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.549 NULL1 00:10:49.549 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.549 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=988015 00:10:49.549 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.809 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.809 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 988015 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.810 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.070 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.070 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 988015 00:10:50.070 09:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.070 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.070 09:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.330 09:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.330 09:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 988015 00:10:50.330 09:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.330 09:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.330 09:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.900 09:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.900 09:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 988015 00:10:50.900 09:25:22 nvmf_tcp.nvmf_connect_stress -- 
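
Taken together, the connect_stress.sh@15-28 entries above perform the whole arrangement: create the TCP transport, a subsystem capped at 10 namespaces (-m 10), a listener on 10.0.0.2:4420 and a 1000-block null bdev, then launch the connect_stress binary against that listener and build a 20-entry RPC batch file. A sketch of the sequence (flags are copied from the trace; the backgrounding of the binary and the per-iteration RPC payload are assumptions, since neither is visible in the xtrace):

# Sketch of the traced setup; rpc_cmd is the harness wrapper around rpc.py.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
    -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -t 10 &
PERF_PID=$!                            # 988015 in this run

rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
rm -f "$rpcs"
for i in $(seq 1 20); do
    cat >> "$rpcs" <<EOF
# placeholder: the RPC text appended per iteration is not shown in this trace
EOF
done
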
target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.900 09:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.900 09:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.160 09:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.160 09:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 988015 00:10:51.160 09:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.160 09:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.160 09:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.420 09:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.420 09:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 988015 00:10:51.420 09:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.420 09:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.420 09:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.681 09:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.681 09:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 988015 00:10:51.681 09:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.681 09:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.681 09:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.941 09:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.941 09:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 988015 00:10:51.941 09:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.941 09:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.941 09:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.513 09:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.513 09:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 988015 00:10:52.513 09:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.513 09:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.513 09:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.773 09:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.773 09:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 988015 00:10:52.773 09:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.773 09:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.774 09:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.034 09:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:53.034 09:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 988015 00:10:53.034 09:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.034 09:25:24 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:53.034 09:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.295 09:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:53.295 09:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 988015 00:10:53.295 09:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.295 09:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:53.295 09:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.556 09:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:53.556 09:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 988015 00:10:53.556 09:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.556 09:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:53.556 09:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.128 09:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.128 09:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 988015 00:10:54.128 09:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.128 09:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.128 09:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.389 09:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.389 09:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 988015 00:10:54.389 09:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.389 09:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.389 09:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.649 09:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.649 09:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 988015 00:10:54.649 09:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.649 09:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.649 09:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.911 09:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.911 09:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 988015 00:10:54.911 09:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.911 09:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.911 09:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.172 09:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:55.172 09:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 988015 00:10:55.172 09:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.172 09:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:10:55.172 09:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x [polling loop elided: connect_stress.sh@34 kill -0 988015 / @35 rpc_cmd / xtrace_disable / set +x repeats while the stress process is alive, timestamps 00:10:55.744 through 00:10:59.497] 00:10:59.758 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:59.758 09:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:59.758 09:25:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 988015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (988015) - No such process 00:10:59.758 09:25:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 --
# wait 988015 00:10:59.758 09:25:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:59.758 09:25:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:59.758 09:25:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:59.758 09:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:59.758 09:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:59.758 09:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:59.758 09:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:59.758 09:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:59.758 09:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:59.758 rmmod nvme_tcp 00:10:59.758 rmmod nvme_fabrics 00:11:00.018 rmmod nvme_keyring 00:11:00.018 09:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:00.018 09:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:00.018 09:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:00.018 09:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 987989 ']' 00:11:00.018 09:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 987989 00:11:00.018 09:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 987989 ']' 00:11:00.018 09:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 987989 00:11:00.018 09:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:11:00.018 09:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:00.019 09:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 987989 00:11:00.019 09:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:11:00.019 09:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:11:00.019 09:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 987989' 00:11:00.019 killing process with pid 987989 00:11:00.019 09:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 987989 00:11:00.019 09:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 987989 00:11:00.019 09:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:00.019 09:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:00.019 09:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:00.019 09:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:00.019 09:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:00.019 09:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.019 09:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:00.019 09:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.613 09:25:33 nvmf_tcp.nvmf_connect_stress -- 
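The shutdown above walks connect_stress.sh lines 34-43: poll the stress pid, reap it, remove the RPC scratch file, then hand off to nvmftestfini. A hedged sketch of that shape (helper names as they appear in the log; loop details are assumed, not quoted from the script):

  # Poll until the stress process exits; each pass issues one rpc_cmd (line 35).
  while kill -0 "$stress_pid" 2>/dev/null; do
      rpc_cmd
  done
  wait "$stress_pid"                 # line 38: reap it (the pid may already be gone)
  rm -f "$testdir/rpc.txt"           # line 39: drop the RPC scratch file
  trap - SIGINT SIGTERM EXIT         # line 41: clear the error trap
  nvmftestfini                       # line 43: kill nvmf_tgt, then modprobe -r nvme-tcp / nvme-fabrics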
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:02.613 00:11:02.613 real 0m20.096s 00:11:02.613 user 0m40.270s 00:11:02.613 sys 0m8.467s 00:11:02.613 09:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:02.613 09:25:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.613 ************************************ 00:11:02.613 END TEST nvmf_connect_stress 00:11:02.613 ************************************ 00:11:02.614 09:25:33 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:02.614 09:25:33 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:02.614 09:25:33 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:02.614 09:25:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:02.614 ************************************ 00:11:02.614 START TEST nvmf_fused_ordering 00:11:02.614 ************************************ 00:11:02.614 09:25:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:02.614 * Looking for test storage... 00:11:02.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.614 
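run_test brackets every test script with a START/END banner and a time measurement, which is where the real/user/sys block above comes from. A sketch of the wrapper's shape (assumed, not the harness source):

  run_test() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"                      # produces the real/user/sys summary seen above
      echo "END TEST $name"
  }
  # e.g. run_test nvmf_fused_ordering $rootdir/test/nvmf/target/fused_ordering.sh --transport=tcp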
09:25:34 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2-4 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [the golangci/protoc/go prefixes are re-prepended on every source of export.sh, so the raw log carries six copies of the same trio in each of export.sh@2, @3 and @4; shown here deduplicated] 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo of the expanded PATH (same value as above) 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.614
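The duplicated prefixes in that PATH come from paths/export.sh prepending its directories unconditionally on every source. A guard along these lines (a sketch, not the packaged script) keeps the prepend idempotent:

  # prepend_path: put $1 at the front of PATH only if it is not already present
  prepend_path() {
      case ":$PATH:" in
          *":$1:"*) ;;               # already on PATH, leave it alone
          *) PATH="$1:$PATH" ;;
      esac
  }
  prepend_path /opt/go/1.21.1/bin
  prepend_path /opt/protoc/21.7/bin
  prepend_path /opt/golangci/1.54.2/bin
  export PATH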
09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:02.614 09:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:09.206 09:25:40 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:09.206 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:09.207 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:09.207 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- 
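The device discovery here amounts to matching vendor 0x8086 / device 0x159b (the E810 IDs collected into the e810 array above) and then reading each matching function's net/ directory, as the next entries show. A condensed sketch, assuming the standard sysfs layout:

  # Enumerate E810 functions and print the netdev each one exposes.
  for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
      echo "Found ${pci##*/}: $(ls "$pci/net")"    # e.g. 0000:4b:00.0: cvl_0_0
  done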
# for net_dev in "${!pci_net_devs[@]}" 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:09.207 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:09.207 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 
netns cvl_0_0_ns_spdk 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:09.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:09.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:11:09.207 00:11:09.207 --- 10.0.0.2 ping statistics --- 00:11:09.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.207 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:09.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:09.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:11:09.207 00:11:09.207 --- 10.0.0.1 ping statistics --- 00:11:09.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.207 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=994184 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 994184 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 994184 ']' 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:09.207 09:25:40 
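Condensed, nvmf_tcp_init splits the two ports across network namespaces: the target port (cvl_0_0) moves into cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator port (cvl_0_1) keeps 10.0.0.1/24 in the root namespace, and the cross-pings verify the path. The same commands as a plain sequence:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace -> initiator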
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:09.207 09:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:09.207 [2024-06-11 09:25:40.719845] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:11:09.207 [2024-06-11 09:25:40.719913] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.207 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.207 [2024-06-11 09:25:40.792934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.207 [2024-06-11 09:25:40.866209] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.207 [2024-06-11 09:25:40.866251] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.207 [2024-06-11 09:25:40.866258] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.207 [2024-06-11 09:25:40.866266] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.207 [2024-06-11 09:25:40.866271] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
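The target is launched inside the namespace and waitforlisten blocks until its RPC socket answers. Roughly (the rpc_get_methods probe is an assumption about waitforlisten's mechanics, not a quote of the helper):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # Poll the default RPC socket until the app is up.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done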
00:11:09.207 [2024-06-11 09:25:40.866299] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.778 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:09.778 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:11:09.778 09:25:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:09.778 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:09.778 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:10.039 [2024-06-11 09:25:41.617014] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:10.039 [2024-06-11 09:25:41.633152] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:10.039 NULL1 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:10.039 09:25:41 
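rpc_cmd drives SPDK's JSON-RPC interface (scripts/rpc.py), so the provisioning sequence above corresponds to the calls below (a sketch; the default /var/tmp/spdk.sock socket is assumed):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512      # 1000 MiB null bdev, 512-byte blocks
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1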
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:10.039 09:25:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:10.039 [2024-06-11 09:25:41.689404] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:11:10.039 [2024-06-11 09:25:41.689466] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid994385 ] 00:11:10.039 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.446 Attached to nqn.2016-06.io.spdk:cnode1 00:11:10.446 Namespace ID: 1 size: 1GB 00:11:10.610 fused_ordering(0) [counter output elided: one fused_ordering(N) line per completed iteration, N = 1 through 938, timestamps advancing from 00:11:10.610 to 00:11:12.764] 00:11:12.764 fused_ordering(939) 00:11:12.764
fused_ordering(940) 00:11:12.764 fused_ordering(941) 00:11:12.764 fused_ordering(942) 00:11:12.764 fused_ordering(943) 00:11:12.764 fused_ordering(944) 00:11:12.764 fused_ordering(945) 00:11:12.764 fused_ordering(946) 00:11:12.764 fused_ordering(947) 00:11:12.764 fused_ordering(948) 00:11:12.764 fused_ordering(949) 00:11:12.764 fused_ordering(950) 00:11:12.764 fused_ordering(951) 00:11:12.765 fused_ordering(952) 00:11:12.765 fused_ordering(953) 00:11:12.765 fused_ordering(954) 00:11:12.765 fused_ordering(955) 00:11:12.765 fused_ordering(956) 00:11:12.765 fused_ordering(957) 00:11:12.765 fused_ordering(958) 00:11:12.765 fused_ordering(959) 00:11:12.765 fused_ordering(960) 00:11:12.765 fused_ordering(961) 00:11:12.765 fused_ordering(962) 00:11:12.765 fused_ordering(963) 00:11:12.765 fused_ordering(964) 00:11:12.765 fused_ordering(965) 00:11:12.765 fused_ordering(966) 00:11:12.765 fused_ordering(967) 00:11:12.765 fused_ordering(968) 00:11:12.765 fused_ordering(969) 00:11:12.765 fused_ordering(970) 00:11:12.765 fused_ordering(971) 00:11:12.765 fused_ordering(972) 00:11:12.765 fused_ordering(973) 00:11:12.765 fused_ordering(974) 00:11:12.765 fused_ordering(975) 00:11:12.765 fused_ordering(976) 00:11:12.765 fused_ordering(977) 00:11:12.765 fused_ordering(978) 00:11:12.765 fused_ordering(979) 00:11:12.765 fused_ordering(980) 00:11:12.765 fused_ordering(981) 00:11:12.765 fused_ordering(982) 00:11:12.765 fused_ordering(983) 00:11:12.765 fused_ordering(984) 00:11:12.765 fused_ordering(985) 00:11:12.765 fused_ordering(986) 00:11:12.765 fused_ordering(987) 00:11:12.765 fused_ordering(988) 00:11:12.765 fused_ordering(989) 00:11:12.765 fused_ordering(990) 00:11:12.765 fused_ordering(991) 00:11:12.765 fused_ordering(992) 00:11:12.765 fused_ordering(993) 00:11:12.765 fused_ordering(994) 00:11:12.765 fused_ordering(995) 00:11:12.765 fused_ordering(996) 00:11:12.765 fused_ordering(997) 00:11:12.765 fused_ordering(998) 00:11:12.765 fused_ordering(999) 00:11:12.765 fused_ordering(1000) 00:11:12.765 fused_ordering(1001) 00:11:12.765 fused_ordering(1002) 00:11:12.765 fused_ordering(1003) 00:11:12.765 fused_ordering(1004) 00:11:12.765 fused_ordering(1005) 00:11:12.765 fused_ordering(1006) 00:11:12.765 fused_ordering(1007) 00:11:12.765 fused_ordering(1008) 00:11:12.765 fused_ordering(1009) 00:11:12.765 fused_ordering(1010) 00:11:12.765 fused_ordering(1011) 00:11:12.765 fused_ordering(1012) 00:11:12.765 fused_ordering(1013) 00:11:12.765 fused_ordering(1014) 00:11:12.765 fused_ordering(1015) 00:11:12.765 fused_ordering(1016) 00:11:12.765 fused_ordering(1017) 00:11:12.765 fused_ordering(1018) 00:11:12.765 fused_ordering(1019) 00:11:12.765 fused_ordering(1020) 00:11:12.765 fused_ordering(1021) 00:11:12.765 fused_ordering(1022) 00:11:12.765 fused_ordering(1023) 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:11:12.765 rmmod nvme_tcp 00:11:12.765 rmmod nvme_fabrics 00:11:12.765 rmmod nvme_keyring 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 994184 ']' 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 994184 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 994184 ']' 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 994184 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 994184 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 994184' 00:11:12.765 killing process with pid 994184 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 994184 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 994184 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:12.765 09:25:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.309 09:25:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:15.309 00:11:15.309 real 0m12.618s 00:11:15.309 user 0m7.003s 00:11:15.309 sys 0m6.573s 00:11:15.309 09:25:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:15.309 09:25:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:15.309 ************************************ 00:11:15.309 END TEST nvmf_fused_ordering 00:11:15.309 ************************************ 00:11:15.309 09:25:46 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:15.309 09:25:46 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:15.309 09:25:46 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:15.309 09:25:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:15.309 
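The teardown traced above follows the harness's standard shape: sync, retry the kernel module unload until references drain, then stop the target process. A minimal Bash sketch of that pattern, condensed from the nvmf/common.sh and autotest_common.sh fragments in this trace; the function name, retry pacing, and error handling here are illustrative, not the exact harness code:

nvmf_cleanup_sketch() {
    local nvmf_pid=$1
    sync
    set +e                                # unload can fail while connections drain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break  # retry until the module reference count hits zero
        sleep 0.2
    done
    modprobe -v -r nvme-fabrics
    set -e
    # kill -0 only probes whether the pid exists; it sends no signal
    if kill -0 "$nvmf_pid" 2>/dev/null; then
        echo "killing process with pid $nvmf_pid"
        kill "$nvmf_pid"
        wait "$nvmf_pid" 2>/dev/null || true  # wait only applies to children of this shell
    fi
}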
************************************ 00:11:15.309 START TEST nvmf_delete_subsystem 00:11:15.309 ************************************ 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:15.309 * Looking for test storage... 00:11:15.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain directories repeated four more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[previous PATH value]
00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[previous PATH value]
00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo [the exported PATH; duplicated toolchain entries elided]
00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0
00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0
00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:15.309 09:25:46
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:15.309 09:25:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:21.900 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:21.900 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.900 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.901 
09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:21.901 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:21.901 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.901 09:25:53 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:21.901 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:22.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:22.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:11:22.162 00:11:22.162 --- 10.0.0.2 ping statistics --- 00:11:22.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.162 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:22.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:22.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:11:22.162 00:11:22.162 --- 10.0.0.1 ping statistics --- 00:11:22.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.162 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=999047 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 999047 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 999047 ']' 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem 
-- common/autotest_common.sh@835 -- # local max_retries=100 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:22.162 09:25:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.162 [2024-06-11 09:25:53.892015] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:11:22.162 [2024-06-11 09:25:53.892098] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.162 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.423 [2024-06-11 09:25:53.977774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:22.423 [2024-06-11 09:25:54.072940] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.423 [2024-06-11 09:25:54.072995] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.423 [2024-06-11 09:25:54.073004] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.423 [2024-06-11 09:25:54.073010] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.423 [2024-06-11 09:25:54.073016] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.423 [2024-06-11 09:25:54.073175] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.423 [2024-06-11 09:25:54.073180] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.994 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:22.994 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:11:22.994 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:22.994 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:22.994 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.994 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.994 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:22.994 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:22.994 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.994 [2024-06-11 09:25:54.794600] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.994 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:22.994 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:22.994 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:22.994 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:23.255 [2024-06-11 09:25:54.818765] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:23.255 NULL1 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:23.255 Delay0 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=999344 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:23.255 09:25:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:23.255 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.255 [2024-06-11 09:25:54.915403] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
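Stripped of xtrace noise, the setup traced above reduces to the RPC sequence below. Every rpc_cmd invocation appears verbatim in this trace; rpc_cmd is the harness wrapper that forwards to scripts/rpc.py against the running nvmf_tgt, and bdev_delay_create's latency arguments are microseconds, so the values used here add roughly one second per I/O, which is what keeps requests in flight when the subsystem is deleted two seconds later. A condensed sketch (the explicit backgrounding and perf_pid capture are shown here for clarity; the harness records the pid the same way):

# Target setup as traced (delete_subsystem.sh@15-28):
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512          # 1000 MiB null backing bdev, 512-byte blocks
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s added latency per read/write
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Drive I/O against the listener, then delete the subsystem under load:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1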
00:11:25.168 09:25:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:25.168 09:25:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable
00:11:25.168 09:25:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:25.429 [several hundred per-I/O lines elided: repeated "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" as queued I/O is failed back while the subsystem is deleted under load]
00:11:25.429 [2024-06-11 09:25:57.081138] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e19c80 is same with the state(5) to be set
00:11:25.430 [2024-06-11 09:25:57.084472] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f155c00c470 is same with the state(5) to be set
00:11:26.373 [2024-06-11 09:25:58.054761] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9550 is same with the state(5) to be set
00:11:26.373 [further Read/Write completion errors elided around the following qpair state transitions]
00:11:26.373 [2024-06-11 09:25:58.085205] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e19e60 is same with the state(5) to be set
00:11:26.373 [2024-06-11 09:25:58.086220] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a220 is same with the state(5) to be set
00:11:26.373 [2024-06-11 09:25:58.087284] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f155c00bfe0 is same with the state(5) to be set
00:11:26.373 [2024-06-11 09:25:58.087372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f155c00c780 is same with the state(5) to be set
00:11:26.373 Initializing NVMe Controllers
00:11:26.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:26.373 Controller IO queue size 128, less than required.
00:11:26.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:26.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:26.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:26.373 Initialization complete. Launching workers.
00:11:26.373 ========================================================
00:11:26.373                                                              Latency(us)
00:11:26.373 Device Information                                                       :   IOPS   MiB/s   Average       min        max
00:11:26.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 183.21    0.09 908390.99    277.30 1008993.00
00:11:26.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 149.36    0.07 944898.46    233.40 1010422.80
00:11:26.373 ========================================================
00:11:26.373 Total                                                                    : 332.57    0.16 924786.56    233.40 1010422.80
00:11:26.373
00:11:26.373 [2024-06-11 09:25:58.087942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df9550 (9): Bad file descriptor
00:11:26.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:11:26.373 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:11:26.373 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:11:26.373 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 999344
00:11:26.373 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 999344
00:11:26.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (999344) - No such process
00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 999344
00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0
00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 999344
00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait
00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case
"$(type -t "$arg")" in 00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 999344 00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:26.944 [2024-06-11 09:25:58.619766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:26.944 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:26.945 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1000069 00:11:26.945 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:26.945 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:26.945 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1000069 00:11:26.945 09:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:26.945 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.945 [2024-06-11 09:25:58.686147] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:11:27.517 09:25:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:27.517 09:25:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1000069
00:11:27.517 09:25:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:28.087 [the same @60/@57/@58 poll cycle repeats at 00:11:28.087, 00:11:28.347, 00:11:28.917, 00:11:29.487 and 00:11:30.057 while spdk_nvme_perf runs]
00:11:30.318 Initializing NVMe Controllers
00:11:30.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:30.318 Controller IO queue size 128, less than required.
00:11:30.318 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:30.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:30.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:30.318 Initialization complete. Launching workers.
00:11:30.318 ========================================================
00:11:30.318                                                                               Latency(us)
00:11:30.318 Device Information                                                    :    IOPS   MiB/s    Average        min        max
00:11:30.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  128.00    0.06 1002393.06 1000259.88 1043437.49
00:11:30.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  128.00    0.06 1004160.28 1000270.04 1011329.41
00:11:30.318 ========================================================
00:11:30.318 Total                                                                 :  256.00    0.12 1003276.67 1000259.88 1043437.49
00:11:30.318
00:11:30.578 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:30.578 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1000069
00:11:30.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1000069) - No such process
00:11:30.578 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1000069
00:11:30.578 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:11:30.578 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:11:30.578 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:30.578 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:11:30.578 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:30.578 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:11:30.578 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:30.578 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:30.578 rmmod nvme_tcp
00:11:30.578 rmmod nvme_fabrics
00:11:30.578 rmmod nvme_keyring
00:11:30.578 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:30.578 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:11:30.578 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:11:30.578 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 999047 ']'
00:11:30.578 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 999047
00:11:30.578 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@949-@959 -- # [killprocess guards condensed: pid non-empty, process alive, uname is Linux, ps comm is reactor_0, not sudo]
00:11:30.578 killing process with pid 999047
00:11:30.578 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 999047
00:11:30.578 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait 999047
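The Total row in each Latency(us) summary above is consistent with an IOPS-weighted mean of the per-core averages. A quick arithmetic check against both tables (not part of the original log):

    awk 'BEGIN { printf "%.2f\n", (183.21*908390.99 + 149.36*944898.46) / 332.57 }'
    # prints ~924786.80; the table's 924786.56 differs only by rounding of the displayed inputs
    awk 'BEGIN { printf "%.2f\n", (128.00*1002393.06 + 128.00*1004160.28) / 256.00 }'
    # prints 1003276.67, exactly the reported Total for the second run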
00:11:30.839 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:30.839 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:11:30.839 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:11:30.839 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:30.839 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:11:30.839 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:30.839 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:30.839 09:26:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:32.756 09:26:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:11:32.756
00:11:32.756 real    0m17.843s
00:11:32.756 user    0m31.002s
00:11:32.756 sys     0m6.201s
00:11:32.756 09:26:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable
00:11:32.756 09:26:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:32.756 ************************************
00:11:32.756 END TEST nvmf_delete_subsystem
00:11:32.756 ************************************
00:11:32.756 09:26:04 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:11:32.756 09:26:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:11:32.756 09:26:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:11:32.756 09:26:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:11:33.018 ************************************
00:11:33.018 START TEST nvmf_ns_masking
00:11:33.018 ************************************
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:11:33.018 * Looking for test storage...
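The START/END banners and the real/user/sys block above come from the run_test harness that wraps every sub-test. A hedged reconstruction of its shape, inferred from the banners and the common/autotest_common.sh@1100/@1124 records; the actual SPDK implementation differs in detail:

    # Sketch only: mirrors the observable behavior of run_test, not the verbatim source.
    run_test() {
        local test_name=$1
        shift
        [ $# -le 0 ] && return 1    # the trace shows an argument-count guard ('[' 3 -le 1 ']')
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                   # produces the real/user/sys block seen after each test
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }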
00:11:33.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9-@16 -- # [defaults condensed: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_TRANSPORT_OPTS=, NVMF_SERIAL=SPDKISFASTANDAWESOME]
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508-@517 -- # [wpdk_common.sh check, then source /etc/opt/spdk-pkgdep/paths/export.sh]
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2-@6 -- # [four successive PATH prepends, export PATH and the final echo elided: the same /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin segments repeat ahead of the standard system directories]
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47-@51 -- # [build_nvmf_app_args condensed: NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF), NVMF_APP+=("${NO_HUGE[@]}"), have_pci_nics=0]
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=0e519daf-67c8-46b4-9024-f3319676f984
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']'
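The host identity used throughout ns_masking comes straight from nvme-cli: common.sh generates a fresh host NQN and carries the embedded UUID as the host ID. A short sketch mirroring the @17-@19 records above; the parameter expansion used to split out the UUID is an assumption, the array literal is verbatim from the trace:

    # Hedged sketch of the hostnqn/hostid derivation seen in nvmf/common.sh@17-@19.
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the trailing UUID field (assumption)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")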
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable
00:11:33.018 09:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289-@298 -- # [local array declarations condensed: pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722, mlx]
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301-@318 -- # [supported device-ID registration condensed: e810 0x1592/0x159b, x722 0x37d2, mlx 0xa2dc/0x1021/0xa2d6/0x101d/0x1017/0x1019/0x1015/0x1013]
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:11:39.610 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342-@352 -- # [driver checks condensed: ice is neither unknown nor unbound, 0x159b is not 0x1017/0x1019, transport is not rdma]
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:11:39.610 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342-@352 -- # [the same driver checks for the second port]
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] / [[ tcp == rdma ]]
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388-@399 -- # [tcp/up checks condensed; pci_net_devs=("${pci_net_devs[@]##*/}")]
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:11:39.610 Found net devices under 0000:4b:00.0: cvl_0_0
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
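After matching supported device IDs, gather_supported_nvmf_pci_devs resolves each PCI function to its kernel net device through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A condensed sketch of that step under the assumptions visible in the trace (the sysfs paths and expansions are verbatim, the loop is simplified):

    # Sketch of the sysfs lookup traced at nvmf/common.sh@383-@401.
    net_devs=()
    for pci in 0000:4b:00.0 0000:4b:00.1; do               # the two E810 functions found above
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # kernel exposes bound netdevs here
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done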
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382-@399 -- # [the same sysfs lookup for the second port condensed]
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:11:39.610 Found net devices under 0000:4b:00.1: cvl_0_1
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416-@418 -- # [[ yes == yes ]] / [[ tcp == tcp ]] / nvmf_tcp_init
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:11:39.610 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:11:39.871 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:39.872 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:39.872 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:39.872 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:11:39.872 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:39.872 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:39.872 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:40.133 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:11:40.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:40.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms
00:11:40.133
00:11:40.133 --- 10.0.0.2 ping statistics ---
00:11:40.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:40.133 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms
00:11:40.133 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:40.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:40.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms
00:11:40.133
00:11:40.133 --- 10.0.0.1 ping statistics ---
00:11:40.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:40.133 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms
00:11:40.133 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:40.133 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0
00:11:40.133 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450-@474 -- # ['' is not iso; NVMF_TRANSPORT_OPTS='-t tcp', then '-t tcp -o'; modprobe nvme-tcp]
00:11:40.133 09:26:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF
00:11:40.133 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:40.133 09:26:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable
00:11:40.133 09:26:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:40.133 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1004788
00:11:40.133 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:40.133 09:26:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1004788
00:11:40.133 09:26:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830-@839 -- # [waitforlisten setup condensed: rpc_addr=/var/tmp/spdk.sock, max_retries=100]
00:11:40.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:40.133 [2024-06-11 09:26:11.804576] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
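Because initiator and target share one machine, nvmf_tcp_init splits the two E810 ports across network namespaces: cvl_0_0 (10.0.0.2, target side) moves into cvl_0_0_ns_spdk while cvl_0_1 (10.0.0.1, initiator side) stays in the root namespace, and both directions are verified with a single ping. The commands, collected verbatim from the @244-@264 records above into one runnable block:

    # Same-host NVMe/TCP topology setup, as traced in nvmf/common.sh@244-@264.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in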
00:11:40.133 [2024-06-11 09:26:11.804639] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:40.133 EAL: No free 2048 kB hugepages reported on node 1
00:11:40.395 [2024-06-11 09:26:11.895021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:40.395 [2024-06-11 09:26:11.992795] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:40.395 [2024-06-11 09:26:11.992853] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:40.395 [2024-06-11 09:26:11.992861] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:40.395 [2024-06-11 09:26:11.992868] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:40.395 [2024-06-11 09:26:11.992874] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:40.395 [2024-06-11 09:26:11.993031] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:11:40.395 [2024-06-11 09:26:11.993162] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:11:40.395 [2024-06-11 09:26:11.993354] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:11:40.395 [2024-06-11 09:26:11.993357] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:11:40.976 09:26:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:11:40.976 09:26:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0
00:11:40.976 09:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:11:40.976 09:26:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable
00:11:40.976 09:26:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:40.976 09:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:40.976 09:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:11:41.240 [2024-06-11 09:26:12.916818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:41.240 09:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64
00:11:41.240 09:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512
00:11:41.240 09:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:11:41.501 Malloc1
00:11:41.501 09:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:11:41.762 Malloc2
00:11:41.762 09:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:42.022 09:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
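Once nvmf_tgt is up and listening on its RPC socket, ns_masking.sh provisions the target entirely over rpc.py. The calls traced at @47-@57 above (and the listener added immediately after), collected in one place, with rpc.py standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path:

    # Target provisioning, verbatim from the ns_masking.sh trace records.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1     # 64 MB malloc bdev, 512-byte blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1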
00:11:42.022 09:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:42.283 [2024-06-11 09:26:14.022157] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:42.283 09:26:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect
00:11:42.283 09:26:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0e519daf-67c8-46b4-9024-f3319676f984 -a 10.0.0.2 -s 4420 -i 4
00:11:42.545 09:26:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME
00:11:42.545 09:26:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197-@1204 -- # [waitforserial setup condensed: i=0, nvme_device_counter=1, sleep 2]
00:11:44.460 09:26:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 ))
00:11:44.460 09:26:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:11:44.460 09:26:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME
00:11:44.460 09:26:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206-@1207 -- # nvme_devices=1, matches nvme_device_counter, return 0
00:11:44.460 09:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json
00:11:44.460 09:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:11:44.720 09:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0
00:11:44.720 09:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]]
00:11:44.720 09:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1
00:11:44.720 09:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:11:44.720 09:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1
00:11:44.720 [ 0]:0x1
00:11:44.720 09:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:44.720 09:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid
00:11:44.720 09:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40-@41 -- # nguid=c1dfb793ecc74ef9bdab98c264a64468, non-zero: visible
00:11:44.981 09:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:11:44.981 09:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1
00:11:44.981 [ 0]:0x1
00:11:44.981 09:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40-@41 -- # nguid=c1dfb793ecc74ef9bdab98c264a64468, non-zero: visible
00:11:44.981 09:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2
00:11:44.981 [ 1]:0x2
00:11:44.981 09:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40-@41 -- # nguid=18275112b9ba427980a3218dbe34f5e0, non-zero: visible
00:11:45.242 09:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect
00:11:45.242 09:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:45.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:45.242 09:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:45.242 09:25:58? 09:26:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:11:45.503 09:26:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1
00:11:45.503 09:26:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0e519daf-67c8-46b4-9024-f3319676f984 -a 10.0.0.2 -s 4420 -i 4
00:11:45.764 09:26:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1
00:11:45.764 09:26:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197-@1204 -- # [waitforserial: nvme_device_counter=1, sleep 2]
00:11:47.677 09:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205-@1207 -- # [lsblk | grep -c SPDKISFASTANDAWESOME: nvme_devices=1, matches, return 0]
00:11:47.677 09:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json | jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:11:47.966 09:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22-@23 -- # ctrl_id=nvme0, [[ -z nvme0 ]]
00:11:47.966 09:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1
00:11:47.966 09:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637-@652 -- # [NOT helper internals condensed: run ns_is_visible 0x1, expect it to fail]
00:11:47.966 09:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39-@41 -- # nvme list-ns shows no 0x1 entry; nguid=00000000000000000000000000000000
00:11:47.966 09:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652-@676 -- # es=1: the masked namespace is invisible, NOT inverts this into a pass
00:11:47.967 09:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2
00:11:47.967 [ 0]:0x2
00:11:47.967 09:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40-@41 -- # nguid=18275112b9ba427980a3218dbe34f5e0, non-zero: visible
00:11:47.967 09:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
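The visibility checks above all run through a small helper. A reconstruction of ns_is_visible from the @39-@41 records (the name and line numbers come from the trace; the exact function body is an assumption): it greps the controller's namespace list for the NSID and then treats an all-zero NGUID from Identify Namespace as "masked":

    # Hedged reconstruction, not the verbatim ns_masking.sh source.
    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"    # prints e.g. "[ 0]:0x1" when the NSID is listed
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # a masked namespace identifies with an all-zero NGUID, so this test fails for it
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The script asserts visibility with a plain call and invisibility by wrapping the call in NOT, which inverts the exit status.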
00:11:48.277 09:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1
00:11:48.277 [ 0]:0x1
00:11:48.277 09:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40-@41 -- # nguid=c1dfb793ecc74ef9bdab98c264a64468, non-zero: visible again after nvmf_ns_add_host
00:11:48.277 09:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2
00:11:48.277 [ 1]:0x2
00:11:48.277 09:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40-@41 -- # nguid=18275112b9ba427980a3218dbe34f5e0, non-zero: visible
00:11:48.537 09:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:11:48.537 09:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1
00:11:48.799 09:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637-@676, target/ns_masking.sh@39-@41 -- # [nguid=00000000000000000000000000000000, es=1: hidden again after nvmf_ns_remove_host, as expected]
00:11:48.799 09:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2
00:11:48.799 [ 0]:0x2
00:11:48.799 09:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40-@41 -- # nguid=18275112b9ba427980a3218dbe34f5e0, non-zero: still visible
00:11:48.799 09:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect
00:11:48.799 09:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:48.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:48.799 09:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:11:49.060 09:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2
00:11:49.060 09:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0e519daf-67c8-46b4-9024-f3319676f984 -a 10.0.0.2 -s 4420 -i 4
00:11:49.320 09:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2
00:11:49.320 09:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197-@1204 -- # [waitforserial: nvme_device_counter=2, sleep 2]
00:11:51.301 09:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205-@1207 -- # [lsblk | grep -c SPDKISFASTANDAWESOME: nvme_devices=2, matches, return 0]
00:11:51.301 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json | jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:11:51.562 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22-@23 -- # ctrl_id=nvme0, [[ -z nvme0 ]]
00:11:51.562 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1
00:11:51.562 [ 0]:0x1
00:11:51.562 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40-@41 -- # nguid=c1dfb793ecc74ef9bdab98c264a64468, non-zero: visible to the allowed host
00:11:51.562 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2
00:11:51.562 [ 1]:0x2
00:11:51.562 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40-@41 -- # nguid=18275112b9ba427980a3218dbe34f5e0, non-zero: visible
00:11:51.823 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:11:51.823 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1
00:11:51.823 09:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637-@676, target/ns_masking.sh@39-@41 -- # [nguid=00000000000000000000000000000000, es=1: hidden after host removal, as expected]
00:11:51.823 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2
00:11:52.084 [ 0]:0x2
00:11:52.084 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40-@41 -- # nguid=18275112b9ba427980a3218dbe34f5e0, non-zero: still visible
00:11:52.084 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:11:52.084 09:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637-@643 -- # [NOT helper resolves rpc.py as an executable and runs it]
00:11:52.084 09:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:11:52.084 [2024-06-11 09:26:23.878160] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:11:52.084 request:
00:11:52.084 {
00:11:52.084 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:11:52.084 "nsid": 2,
00:11:52.084 "host": "nqn.2016-06.io.spdk:host1",
00:11:52.084 "method":
"nvmf_ns_remove_host", 00:11:52.084 "req_id": 1 00:11:52.084 } 00:11:52.084 Got JSON-RPC error response 00:11:52.084 response: 00:11:52.084 { 00:11:52.084 "code": -32602, 00:11:52.084 "message": "Invalid parameters" 00:11:52.084 } 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:52.345 09:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:52.345 [ 0]:0x2 00:11:52.345 09:26:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:52.345 09:26:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:52.345 09:26:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=18275112b9ba427980a3218dbe34f5e0 00:11:52.345 09:26:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 18275112b9ba427980a3218dbe34f5e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:52.345 09:26:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:11:52.346 09:26:24 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.607 09:26:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.607 09:26:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:52.607 09:26:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:52.607 09:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:52.607 09:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:52.607 09:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:52.607 09:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:52.607 09:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:52.607 09:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:52.607 rmmod nvme_tcp 00:11:52.867 rmmod nvme_fabrics 00:11:52.868 rmmod nvme_keyring 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1004788 ']' 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1004788 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 1004788 ']' 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 1004788 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1004788 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1004788' 00:11:52.868 killing process with pid 1004788 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 1004788 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 1004788 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.868 09:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.413 
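
Teardown here is the stock nvmftestfini sequence: the initiator-side kernel modules are unloaded with retries (nvme-tcp can stay busy for a moment after a disconnect, hence the rmmod lines above), the nvmf_tgt reactor process is killed by pid, and the target network namespace is dropped. Condensed from the trace into one block, with error handling trimmed:

# Condensed from nvmf/common.sh@488-496 and the killprocess trace above; illustrative.
sync
set +e
for i in {1..20}; do
	modprobe -v -r nvme-tcp && break   # also drops nvme_fabrics / nvme_keyring
done
set -e
kill 1004788 && wait 1004788           # nvmfpid from this run
_remove_spdk_ns                        # deletes the cvl_0_0_ns_spdk namespace
ip -4 addr flush cvl_0_1
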
09:26:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:55.413 00:11:55.413 real 0m22.174s 00:11:55.413 user 0m55.863s 00:11:55.413 sys 0m6.862s 00:11:55.413 09:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:55.413 09:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:55.413 ************************************ 00:11:55.413 END TEST nvmf_ns_masking 00:11:55.413 ************************************ 00:11:55.413 09:26:26 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:55.413 09:26:26 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:55.413 09:26:26 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:55.413 09:26:26 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:55.413 09:26:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:55.413 ************************************ 00:11:55.413 START TEST nvmf_nvme_cli 00:11:55.413 ************************************ 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:55.413 * Looking for test storage... 00:11:55.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:55.413 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:55.414 09:26:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:55.414 09:26:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:02.002 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:02.002 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:02.002 09:26:33 
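
The scan above classifies NICs by PCI vendor:device ID before choosing test ports: Intel 0x1592/0x159b map to the E810 family (the ice driver loaded at the start of the job), 0x37d2 to X722, and the 0x15b3 entries to Mellanox ConnectX parts; both ports found here are 0x8086:0x159b. A trimmed illustration of that bucketing, assuming pci_bus_cache is the pre-built "vendor:device -> BDF list" map used by nvmf/common.sh:

# Trimmed sketch of gather_supported_nvmf_pci_devs (nvmf/common.sh@289+); illustrative.
intel=0x8086 mellanox=0x15b3
declare -ga e810=() x722=() mlx=()
e810+=(${pci_bus_cache["$intel:0x1592"]})
e810+=(${pci_bus_cache["$intel:0x159b"]})   # matches 0000:4b:00.0 / 0000:4b:00.1 above
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) # one of several Mellanox IDs in the real list
pci_devs=("${e810[@]}")                     # SPDK_TEST_NVMF_NICS=e810 for this job
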
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:02.002 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:02.002 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:02.002 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:02.263 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:02.263 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:02.263 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:02.263 09:26:33 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:02.263 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:02.263 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:02.263 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:02.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:02.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:12:02.263 00:12:02.263 --- 10.0.0.2 ping statistics --- 00:12:02.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.263 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:12:02.263 09:26:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:02.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:02.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:12:02.263 00:12:02.263 --- 10.0.0.1 ping statistics --- 00:12:02.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.264 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1011587 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1011587 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 1011587 ']' 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
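
The topology those pings verify is built entirely inside one machine: physical port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2, while its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and both directions are pinged before the target starts. The commands, collected from the trace into a single runnable block:

# TCP test topology, as traced in nvmf/common.sh@229-268 for this run.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # NVMe/TCP port
ping -c 1 10.0.0.2                                                # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                  # and back
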
00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:02.264 09:26:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:02.526 [2024-06-11 09:26:34.093956] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:12:02.526 [2024-06-11 09:26:34.094011] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.526 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.526 [2024-06-11 09:26:34.180043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.526 [2024-06-11 09:26:34.276682] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.526 [2024-06-11 09:26:34.276745] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.526 [2024-06-11 09:26:34.276754] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.526 [2024-06-11 09:26:34.276760] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.526 [2024-06-11 09:26:34.276772] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:02.526 [2024-06-11 09:26:34.276908] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.526 [2024-06-11 09:26:34.277038] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.526 [2024-06-11 09:26:34.277207] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.526 [2024-06-11 09:26:34.277208] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.468 09:26:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:03.468 09:26:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:12:03.468 09:26:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:03.468 09:26:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:03.468 09:26:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.468 [2024-06-11 09:26:35.020143] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.468 Malloc0 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- 
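
nvmfappstart runs nvmf_tgt inside that namespace and then blocks in waitforlisten until the app's RPC socket answers; the EAL banner and the four "Reactor started" notices above are that boot completing on cores 0-3 (-m 0xF). A plausible sketch of the wait, assuming the helper polls rpc_get_methods the way autotest_common.sh does; the retry budget below is a guess:

# Hedged sketch: start the target in the netns and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                   # 1011587 in this run
for ((i = 0; i < 100; i++)); do              # retry budget is an assumption
	# rpc_get_methods only answers once the app is listening on spdk.sock
	scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
	sleep 0.5
done
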
common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.468 Malloc1 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.468 [2024-06-11 09:26:35.109981] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.468 09:26:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:03.468 00:12:03.468 Discovery Log Number of Records 2, Generation counter 2 00:12:03.468 =====Discovery Log Entry 0====== 00:12:03.468 trtype: tcp 00:12:03.468 adrfam: ipv4 00:12:03.468 subtype: current discovery subsystem 00:12:03.468 treq: not required 00:12:03.468 portid: 0 00:12:03.468 trsvcid: 4420 00:12:03.468 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:03.468 traddr: 10.0.0.2 00:12:03.468 eflags: explicit discovery connections, duplicate discovery information 00:12:03.468 sectype: none 00:12:03.468 =====Discovery Log Entry 1====== 00:12:03.468 trtype: tcp 00:12:03.468 adrfam: ipv4 00:12:03.468 subtype: nvme subsystem 00:12:03.468 treq: not required 00:12:03.468 portid: 0 00:12:03.468 trsvcid: 4420 
00:12:03.468 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:03.468 traddr: 10.0.0.2 00:12:03.468 eflags: none 00:12:03.468 sectype: none 00:12:03.469 09:26:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:03.469 09:26:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:03.469 09:26:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:03.469 09:26:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:03.469 09:26:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:03.469 09:26:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:03.469 09:26:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:03.469 09:26:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:03.469 09:26:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:03.469 09:26:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:03.469 09:26:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:05.383 09:26:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:05.383 09:26:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:12:05.383 09:26:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:05.383 09:26:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:12:05.383 09:26:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:12:05.383 09:26:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:07.297 09:26:38 
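
Both discovery log entries above were provisioned moments earlier over JSON-RPC: a TCP transport with 8192-byte in-capsule data, two 64 MiB / 512 B-block malloc bdevs attached as namespaces 1 and 2 of cnode1, and data plus discovery listeners on 10.0.0.2:4420 (-a opens the subsystem to any host). The rpc_cmd calls from the trace, gathered in order (rpc_cmd is the test wrapper around scripts/rpc.py):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd bdev_malloc_create 64 512 -b Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
	-a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
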
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:07.297 /dev/nvme0n1 ]] 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:07.297 09:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:07.297 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:07.297 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:07.297 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:07.297 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:07.297 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:07.297 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:07.297 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:07.297 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:07.297 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:07.297 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:07.297 09:26:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:07.297 09:26:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.558 09:26:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:07.558 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:12:07.558 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:07.558 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.558 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:07.558 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:07.558 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:12:07.558 09:26:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:07.558 09:26:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.558 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:07.558 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:07.558 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:07.558 09:26:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:07.558 09:26:39 
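
The read loop that just ran is get_nvme_devs: it walks the output of nvme list line by line, discards the header rows, and echoes only real /dev/nvme* nodes, which is how the test settles on nvme_num=2 (nvme0n1 and nvme0n2) before disconnecting. Reconstructed from the trace:

# Reconstructed from nvmf/common.sh@521-526 as traced above; illustrative.
get_nvme_devs() {
	local dev _
	while read -r dev _; do
		# "Node" and "---------------------" header lines fail this test
		[[ $dev == /dev/nvme* ]] && echo "$dev"
	done < <(nvme list)
}
devs=($(get_nvme_devs))
nvme_num=${#devs[@]}   # 2 in the run above
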
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:07.558 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:07.558 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:07.558 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:07.558 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:07.558 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:07.558 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:07.558 rmmod nvme_tcp 00:12:07.558 rmmod nvme_fabrics 00:12:07.888 rmmod nvme_keyring 00:12:07.888 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:07.888 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:07.888 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1011587 ']' 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1011587 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@949 -- # '[' -z 1011587 ']' 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 1011587 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1011587 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1011587' 00:12:07.889 killing process with pid 1011587 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 1011587 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 1011587 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.889 09:26:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.464 09:26:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:10.464 00:12:10.464 real 0m14.872s 00:12:10.464 user 0m23.442s 00:12:10.464 sys 0m5.856s 00:12:10.464 09:26:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:10.464 09:26:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.464 ************************************ 00:12:10.464 END TEST nvmf_nvme_cli 00:12:10.464 ************************************ 00:12:10.464 09:26:41 nvmf_tcp 
-- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:10.464 09:26:41 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:10.464 09:26:41 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:10.464 09:26:41 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:10.464 09:26:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:10.464 ************************************ 00:12:10.464 START TEST nvmf_vfio_user 00:12:10.464 ************************************ 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:10.464 * Looking for test storage... 00:12:10.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.464 09:26:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:10.465 
09:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1013397 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1013397' 00:12:10.465 Process pid: 1013397 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1013397 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 1013397 ']' 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:10.465 09:26:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:10.465 [2024-06-11 09:26:41.950876] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:12:10.465 [2024-06-11 09:26:41.950937] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.465 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.465 [2024-06-11 09:26:42.034124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.465 [2024-06-11 09:26:42.106085] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.465 [2024-06-11 09:26:42.106123] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.465 [2024-06-11 09:26:42.106131] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.465 [2024-06-11 09:26:42.106138] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.465 [2024-06-11 09:26:42.106143] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
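[editor's note] The target under test is a plain nvmf_tgt launched with an explicit shared-memory id (-i 0, which is why the trace notice above suggests 'spdk_trace -s nvmf -i 0'), all tracepoint groups enabled (-e 0xFFFF), and a four-core mask; waitforlisten then polls the RPC socket until the app responds. A minimal sketch of the same launch outside the harness, assuming the repository root as working directory and the default /var/tmp/spdk.sock RPC socket:

    # Start the NVMe-oF target: shm id 0 (-i), all tracepoint groups (-e), cores 0-3 (-m)
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    trap 'kill -9 $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    # Block until the app answers RPCs; waitforlisten in autotest_common.sh does this with retries
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done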
00:12:10.465 [2024-06-11 09:26:42.106250] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.465 [2024-06-11 09:26:42.106378] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.465 [2024-06-11 09:26:42.106662] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.465 [2024-06-11 09:26:42.106664] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.036 09:26:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:11.036 09:26:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:12:11.036 09:26:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:12.422 09:26:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:12.422 09:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:12.422 09:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:12.422 09:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:12.422 09:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:12.422 09:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:12.683 Malloc1 00:12:12.683 09:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:12.945 09:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:12.945 09:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:13.207 09:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:13.207 09:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:13.207 09:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:13.468 Malloc2 00:12:13.468 09:26:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:13.728 09:26:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:13.989 09:26:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:14.252 09:26:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:14.252 09:26:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:14.252 09:26:45 
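[editor's note] setup_nvmf_vfio_user, traced above, reduces to one transport-create call plus, per device, a malloc bdev, a subsystem, a namespace, and a listener whose address is a directory under /var/run/vfio-user rather than an IP:port pair. Condensed for the first device, using the exact arguments from this run:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER           # register the vfio-user transport once
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1  # the listener address is this directory
    $rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The second device repeats the last four calls with Malloc2, cnode2, SPDK2, and vfio-user2/2.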
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:14.252 09:26:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:14.252 09:26:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:14.252 09:26:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:14.252 [2024-06-11 09:26:45.841645] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:12:14.252 [2024-06-11 09:26:45.841690] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1014094 ] 00:12:14.252 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.252 [2024-06-11 09:26:45.871946] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:14.252 [2024-06-11 09:26:45.877286] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:14.252 [2024-06-11 09:26:45.877305] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f595a496000 00:12:14.252 [2024-06-11 09:26:45.878288] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:14.252 [2024-06-11 09:26:45.879292] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:14.252 [2024-06-11 09:26:45.880299] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:14.252 [2024-06-11 09:26:45.881303] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:14.252 [2024-06-11 09:26:45.882312] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:14.252 [2024-06-11 09:26:45.883309] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:14.252 [2024-06-11 09:26:45.884327] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:14.252 [2024-06-11 09:26:45.885331] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:14.252 [2024-06-11 09:26:45.886339] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:14.253 [2024-06-11 09:26:45.886351] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f595a48b000 00:12:14.253 [2024-06-11 09:26:45.887680] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:14.253 [2024-06-11 09:26:45.909472] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: 
Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:14.253 [2024-06-11 09:26:45.909493] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:14.253 [2024-06-11 09:26:45.912514] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:14.253 [2024-06-11 09:26:45.912558] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:14.253 [2024-06-11 09:26:45.912644] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:14.253 [2024-06-11 09:26:45.912661] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:14.253 [2024-06-11 09:26:45.912667] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:14.253 [2024-06-11 09:26:45.913512] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:14.253 [2024-06-11 09:26:45.913520] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:14.253 [2024-06-11 09:26:45.913527] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:14.253 [2024-06-11 09:26:45.914514] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:14.253 [2024-06-11 09:26:45.914523] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:14.253 [2024-06-11 09:26:45.914530] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:14.253 [2024-06-11 09:26:45.915529] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:14.253 [2024-06-11 09:26:45.915537] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:14.253 [2024-06-11 09:26:45.916531] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:14.253 [2024-06-11 09:26:45.916541] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:14.253 [2024-06-11 09:26:45.916546] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:14.253 [2024-06-11 09:26:45.916553] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:14.253 [2024-06-11 09:26:45.916658] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:14.253 [2024-06-11 09:26:45.916663] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:14.253 [2024-06-11 09:26:45.916668] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:14.253 [2024-06-11 09:26:45.917541] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:14.253 [2024-06-11 09:26:45.918546] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:14.253 [2024-06-11 09:26:45.919548] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:14.253 [2024-06-11 09:26:45.920547] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:14.253 [2024-06-11 09:26:45.920627] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:14.253 [2024-06-11 09:26:45.921559] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:14.253 [2024-06-11 09:26:45.921567] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:14.253 [2024-06-11 09:26:45.921572] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:14.253 [2024-06-11 09:26:45.921592] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:14.253 [2024-06-11 09:26:45.921600] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:14.253 [2024-06-11 09:26:45.921618] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:14.253 [2024-06-11 09:26:45.921623] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:14.253 [2024-06-11 09:26:45.921637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:14.253 [2024-06-11 09:26:45.921688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:14.253 [2024-06-11 09:26:45.921697] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:14.253 [2024-06-11 09:26:45.921701] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:14.253 [2024-06-11 09:26:45.921706] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:14.253 [2024-06-11 09:26:45.921712] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:14.253 [2024-06-11 09:26:45.921717] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 
1 00:12:14.253 [2024-06-11 09:26:45.921724] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:14.253 [2024-06-11 09:26:45.921728] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:14.253 [2024-06-11 09:26:45.921736] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:14.253 [2024-06-11 09:26:45.921746] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:14.253 [2024-06-11 09:26:45.921762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:14.253 [2024-06-11 09:26:45.921773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.253 [2024-06-11 09:26:45.921781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.253 [2024-06-11 09:26:45.921789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.253 [2024-06-11 09:26:45.921797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.253 [2024-06-11 09:26:45.921802] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:14.253 [2024-06-11 09:26:45.921810] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:14.253 [2024-06-11 09:26:45.921819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:14.253 [2024-06-11 09:26:45.921828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:14.253 [2024-06-11 09:26:45.921833] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:14.253 [2024-06-11 09:26:45.921838] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:14.253 [2024-06-11 09:26:45.921844] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:14.253 [2024-06-11 09:26:45.921850] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:14.253 [2024-06-11 09:26:45.921859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:14.253 [2024-06-11 09:26:45.921869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:14.253 [2024-06-11 09:26:45.921919] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:14.253 [2024-06-11 09:26:45.921926] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:14.253 [2024-06-11 09:26:45.921934] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:14.253 [2024-06-11 09:26:45.921938] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:14.253 [2024-06-11 09:26:45.921944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:14.253 [2024-06-11 09:26:45.921957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:14.253 [2024-06-11 09:26:45.921967] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:14.253 [2024-06-11 09:26:45.921979] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:14.253 [2024-06-11 09:26:45.921986] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:14.253 [2024-06-11 09:26:45.921993] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:14.253 [2024-06-11 09:26:45.921997] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:14.253 [2024-06-11 09:26:45.922003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:14.253 [2024-06-11 09:26:45.922020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:14.253 [2024-06-11 09:26:45.922031] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:14.253 [2024-06-11 09:26:45.922039] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:14.253 [2024-06-11 09:26:45.922045] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:14.253 [2024-06-11 09:26:45.922050] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:14.254 [2024-06-11 09:26:45.922056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:14.254 [2024-06-11 09:26:45.922066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:14.254 [2024-06-11 09:26:45.922074] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:14.254 [2024-06-11 09:26:45.922080] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:14.254 [2024-06-11 09:26:45.922087] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:14.254 [2024-06-11 09:26:45.922093] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:14.254 [2024-06-11 09:26:45.922098] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:14.254 [2024-06-11 09:26:45.922103] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:14.254 [2024-06-11 09:26:45.922107] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:14.254 [2024-06-11 09:26:45.922112] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:14.254 [2024-06-11 09:26:45.922131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:14.254 [2024-06-11 09:26:45.922143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:14.254 [2024-06-11 09:26:45.922154] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:14.254 [2024-06-11 09:26:45.922165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:14.254 [2024-06-11 09:26:45.922178] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:14.254 [2024-06-11 09:26:45.922189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:14.254 [2024-06-11 09:26:45.922200] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:14.254 [2024-06-11 09:26:45.922210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:14.254 [2024-06-11 09:26:45.922221] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:14.254 [2024-06-11 09:26:45.922225] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:14.254 [2024-06-11 09:26:45.922229] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:14.254 [2024-06-11 09:26:45.922232] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:14.254 [2024-06-11 09:26:45.922238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:14.254 [2024-06-11 09:26:45.922246] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:14.254 [2024-06-11 09:26:45.922250] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:14.254 [2024-06-11 09:26:45.922256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:14.254 [2024-06-11 09:26:45.922263] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:14.254 [2024-06-11 09:26:45.922267] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:14.254 [2024-06-11 09:26:45.922273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:14.254 [2024-06-11 09:26:45.922280] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:14.254 [2024-06-11 09:26:45.922285] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:14.254 [2024-06-11 09:26:45.922290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:14.254 [2024-06-11 09:26:45.922297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:14.254 [2024-06-11 09:26:45.922309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:14.254 [2024-06-11 09:26:45.922324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:14.254 [2024-06-11 09:26:45.922333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:14.254 ===================================================== 00:12:14.254 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:14.254 ===================================================== 00:12:14.254 Controller Capabilities/Features 00:12:14.254 ================================ 00:12:14.254 Vendor ID: 4e58 00:12:14.254 Subsystem Vendor ID: 4e58 00:12:14.254 Serial Number: SPDK1 00:12:14.254 Model Number: SPDK bdev Controller 00:12:14.254 Firmware Version: 24.09 00:12:14.254 Recommended Arb Burst: 6 00:12:14.254 IEEE OUI Identifier: 8d 6b 50 00:12:14.254 Multi-path I/O 00:12:14.254 May have multiple subsystem ports: Yes 00:12:14.254 May have multiple controllers: Yes 00:12:14.254 Associated with SR-IOV VF: No 00:12:14.254 Max Data Transfer Size: 131072 00:12:14.254 Max Number of Namespaces: 32 00:12:14.254 Max Number of I/O Queues: 127 00:12:14.254 NVMe Specification Version (VS): 1.3 00:12:14.254 NVMe Specification Version (Identify): 1.3 00:12:14.254 Maximum Queue Entries: 256 00:12:14.254 Contiguous Queues Required: Yes 00:12:14.254 Arbitration Mechanisms Supported 00:12:14.254 Weighted Round Robin: Not Supported 00:12:14.254 Vendor Specific: Not Supported 00:12:14.254 Reset Timeout: 15000 ms 00:12:14.254 Doorbell Stride: 4 bytes 00:12:14.254 NVM Subsystem Reset: Not Supported 00:12:14.254 Command Sets Supported 00:12:14.254 NVM Command Set: Supported 00:12:14.254 Boot Partition: Not Supported 00:12:14.254 Memory Page Size Minimum: 4096 bytes 00:12:14.254 Memory Page Size Maximum: 4096 bytes 00:12:14.254 Persistent Memory Region: Not Supported 00:12:14.254 Optional Asynchronous Events Supported 00:12:14.254 Namespace Attribute Notices: Supported 00:12:14.254 Firmware Activation Notices: Not Supported 00:12:14.254 ANA Change Notices: Not Supported 00:12:14.254 PLE Aggregate Log Change Notices: 
Not Supported 00:12:14.254 LBA Status Info Alert Notices: Not Supported 00:12:14.254 EGE Aggregate Log Change Notices: Not Supported 00:12:14.254 Normal NVM Subsystem Shutdown event: Not Supported 00:12:14.254 Zone Descriptor Change Notices: Not Supported 00:12:14.254 Discovery Log Change Notices: Not Supported 00:12:14.254 Controller Attributes 00:12:14.254 128-bit Host Identifier: Supported 00:12:14.254 Non-Operational Permissive Mode: Not Supported 00:12:14.254 NVM Sets: Not Supported 00:12:14.254 Read Recovery Levels: Not Supported 00:12:14.254 Endurance Groups: Not Supported 00:12:14.254 Predictable Latency Mode: Not Supported 00:12:14.254 Traffic Based Keep ALive: Not Supported 00:12:14.254 Namespace Granularity: Not Supported 00:12:14.254 SQ Associations: Not Supported 00:12:14.254 UUID List: Not Supported 00:12:14.254 Multi-Domain Subsystem: Not Supported 00:12:14.254 Fixed Capacity Management: Not Supported 00:12:14.254 Variable Capacity Management: Not Supported 00:12:14.254 Delete Endurance Group: Not Supported 00:12:14.254 Delete NVM Set: Not Supported 00:12:14.254 Extended LBA Formats Supported: Not Supported 00:12:14.254 Flexible Data Placement Supported: Not Supported 00:12:14.254 00:12:14.254 Controller Memory Buffer Support 00:12:14.254 ================================ 00:12:14.254 Supported: No 00:12:14.254 00:12:14.254 Persistent Memory Region Support 00:12:14.254 ================================ 00:12:14.254 Supported: No 00:12:14.254 00:12:14.254 Admin Command Set Attributes 00:12:14.254 ============================ 00:12:14.254 Security Send/Receive: Not Supported 00:12:14.254 Format NVM: Not Supported 00:12:14.254 Firmware Activate/Download: Not Supported 00:12:14.254 Namespace Management: Not Supported 00:12:14.254 Device Self-Test: Not Supported 00:12:14.254 Directives: Not Supported 00:12:14.254 NVMe-MI: Not Supported 00:12:14.254 Virtualization Management: Not Supported 00:12:14.254 Doorbell Buffer Config: Not Supported 00:12:14.254 Get LBA Status Capability: Not Supported 00:12:14.254 Command & Feature Lockdown Capability: Not Supported 00:12:14.254 Abort Command Limit: 4 00:12:14.254 Async Event Request Limit: 4 00:12:14.254 Number of Firmware Slots: N/A 00:12:14.254 Firmware Slot 1 Read-Only: N/A 00:12:14.254 Firmware Activation Without Reset: N/A 00:12:14.254 Multiple Update Detection Support: N/A 00:12:14.254 Firmware Update Granularity: No Information Provided 00:12:14.254 Per-Namespace SMART Log: No 00:12:14.254 Asymmetric Namespace Access Log Page: Not Supported 00:12:14.254 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:14.254 Command Effects Log Page: Supported 00:12:14.254 Get Log Page Extended Data: Supported 00:12:14.254 Telemetry Log Pages: Not Supported 00:12:14.254 Persistent Event Log Pages: Not Supported 00:12:14.254 Supported Log Pages Log Page: May Support 00:12:14.254 Commands Supported & Effects Log Page: Not Supported 00:12:14.254 Feature Identifiers & Effects Log Page:May Support 00:12:14.254 NVMe-MI Commands & Effects Log Page: May Support 00:12:14.254 Data Area 4 for Telemetry Log: Not Supported 00:12:14.254 Error Log Page Entries Supported: 128 00:12:14.254 Keep Alive: Supported 00:12:14.254 Keep Alive Granularity: 10000 ms 00:12:14.254 00:12:14.255 NVM Command Set Attributes 00:12:14.255 ========================== 00:12:14.255 Submission Queue Entry Size 00:12:14.255 Max: 64 00:12:14.255 Min: 64 00:12:14.255 Completion Queue Entry Size 00:12:14.255 Max: 16 00:12:14.255 Min: 16 00:12:14.255 Number of Namespaces: 32 00:12:14.255 Compare 
Command: Supported 00:12:14.255 Write Uncorrectable Command: Not Supported 00:12:14.255 Dataset Management Command: Supported 00:12:14.255 Write Zeroes Command: Supported 00:12:14.255 Set Features Save Field: Not Supported 00:12:14.255 Reservations: Not Supported 00:12:14.255 Timestamp: Not Supported 00:12:14.255 Copy: Supported 00:12:14.255 Volatile Write Cache: Present 00:12:14.255 Atomic Write Unit (Normal): 1 00:12:14.255 Atomic Write Unit (PFail): 1 00:12:14.255 Atomic Compare & Write Unit: 1 00:12:14.255 Fused Compare & Write: Supported 00:12:14.255 Scatter-Gather List 00:12:14.255 SGL Command Set: Supported (Dword aligned) 00:12:14.255 SGL Keyed: Not Supported 00:12:14.255 SGL Bit Bucket Descriptor: Not Supported 00:12:14.255 SGL Metadata Pointer: Not Supported 00:12:14.255 Oversized SGL: Not Supported 00:12:14.255 SGL Metadata Address: Not Supported 00:12:14.255 SGL Offset: Not Supported 00:12:14.255 Transport SGL Data Block: Not Supported 00:12:14.255 Replay Protected Memory Block: Not Supported 00:12:14.255 00:12:14.255 Firmware Slot Information 00:12:14.255 ========================= 00:12:14.255 Active slot: 1 00:12:14.255 Slot 1 Firmware Revision: 24.09 00:12:14.255 00:12:14.255 00:12:14.255 Commands Supported and Effects 00:12:14.255 ============================== 00:12:14.255 Admin Commands 00:12:14.255 -------------- 00:12:14.255 Get Log Page (02h): Supported 00:12:14.255 Identify (06h): Supported 00:12:14.255 Abort (08h): Supported 00:12:14.255 Set Features (09h): Supported 00:12:14.255 Get Features (0Ah): Supported 00:12:14.255 Asynchronous Event Request (0Ch): Supported 00:12:14.255 Keep Alive (18h): Supported 00:12:14.255 I/O Commands 00:12:14.255 ------------ 00:12:14.255 Flush (00h): Supported LBA-Change 00:12:14.255 Write (01h): Supported LBA-Change 00:12:14.255 Read (02h): Supported 00:12:14.255 Compare (05h): Supported 00:12:14.255 Write Zeroes (08h): Supported LBA-Change 00:12:14.255 Dataset Management (09h): Supported LBA-Change 00:12:14.255 Copy (19h): Supported LBA-Change 00:12:14.255 Unknown (79h): Supported LBA-Change 00:12:14.255 Unknown (7Ah): Supported 00:12:14.255 00:12:14.255 Error Log 00:12:14.255 ========= 00:12:14.255 00:12:14.255 Arbitration 00:12:14.255 =========== 00:12:14.255 Arbitration Burst: 1 00:12:14.255 00:12:14.255 Power Management 00:12:14.255 ================ 00:12:14.255 Number of Power States: 1 00:12:14.255 Current Power State: Power State #0 00:12:14.255 Power State #0: 00:12:14.255 Max Power: 0.00 W 00:12:14.255 Non-Operational State: Operational 00:12:14.255 Entry Latency: Not Reported 00:12:14.255 Exit Latency: Not Reported 00:12:14.255 Relative Read Throughput: 0 00:12:14.255 Relative Read Latency: 0 00:12:14.255 Relative Write Throughput: 0 00:12:14.255 Relative Write Latency: 0 00:12:14.255 Idle Power: Not Reported 00:12:14.255 Active Power: Not Reported 00:12:14.255 Non-Operational Permissive Mode: Not Supported 00:12:14.255 00:12:14.255 Health Information 00:12:14.255 ================== 00:12:14.255 Critical Warnings: 00:12:14.255 Available Spare Space: OK 00:12:14.255 Temperature: OK 00:12:14.255 Device Reliability: OK 00:12:14.255 Read Only: No 00:12:14.255 Volatile Memory Backup: OK 00:12:14.255 Current Temperature: 0 Kelvin (-273 Celsius) [2024-06-11 09:26:45.922433] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:14.255 [2024-06-11 09:26:45.922445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014
p:1 m:0 dnr:0 00:12:14.255 [2024-06-11 09:26:45.922470] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:14.255 [2024-06-11 09:26:45.922478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.255 [2024-06-11 09:26:45.922485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.255 [2024-06-11 09:26:45.922491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.255 [2024-06-11 09:26:45.922499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.255 [2024-06-11 09:26:45.922563] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:14.255 [2024-06-11 09:26:45.922573] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:14.255 [2024-06-11 09:26:45.923569] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:14.255 [2024-06-11 09:26:45.923620] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:14.255 [2024-06-11 09:26:45.923626] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:14.255 [2024-06-11 09:26:45.924581] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:14.255 [2024-06-11 09:26:45.924593] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:14.255 [2024-06-11 09:26:45.924655] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:14.255 [2024-06-11 09:26:45.929324] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:14.255 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:14.255 Available Spare: 0% 00:12:14.255 Available Spare Threshold: 0% 00:12:14.255 Life Percentage Used: 0% 00:12:14.255 Data Units Read: 0 00:12:14.255 Data Units Written: 0 00:12:14.255 Host Read Commands: 0 00:12:14.255 Host Write Commands: 0 00:12:14.255 Controller Busy Time: 0 minutes 00:12:14.255 Power Cycles: 0 00:12:14.255 Power On Hours: 0 hours 00:12:14.255 Unsafe Shutdowns: 0 00:12:14.255 Unrecoverable Media Errors: 0 00:12:14.255 Lifetime Error Log Entries: 0 00:12:14.255 Warning Temperature Time: 0 minutes 00:12:14.255 Critical Temperature Time: 0 minutes 00:12:14.255 00:12:14.255 Number of Queues 00:12:14.255 ================ 00:12:14.255 Number of I/O Submission Queues: 127 00:12:14.255 Number of I/O Completion Queues: 127 00:12:14.255 00:12:14.255 Active Namespaces 00:12:14.255 ================= 00:12:14.255 Namespace ID:1 00:12:14.255 Error Recovery Timeout: Unlimited 00:12:14.255 Command Set Identifier: NVM (00h) 00:12:14.255 Deallocate: Supported 00:12:14.255 Deallocated/Unwritten Error: Not Supported 00:12:14.255 Deallocated Read Value: Unknown 00:12:14.255 Deallocate
in Write Zeroes: Not Supported 00:12:14.255 Deallocated Guard Field: 0xFFFF 00:12:14.255 Flush: Supported 00:12:14.255 Reservation: Supported 00:12:14.255 Namespace Sharing Capabilities: Multiple Controllers 00:12:14.255 Size (in LBAs): 131072 (0GiB) 00:12:14.255 Capacity (in LBAs): 131072 (0GiB) 00:12:14.255 Utilization (in LBAs): 131072 (0GiB) 00:12:14.255 NGUID: F81701457B174B608DDD9AFC2B35222C 00:12:14.255 UUID: f8170145-7b17-4b60-8ddd-9afc2b35222c 00:12:14.255 Thin Provisioning: Not Supported 00:12:14.255 Per-NS Atomic Units: Yes 00:12:14.255 Atomic Boundary Size (Normal): 0 00:12:14.255 Atomic Boundary Size (PFail): 0 00:12:14.255 Atomic Boundary Offset: 0 00:12:14.255 Maximum Single Source Range Length: 65535 00:12:14.255 Maximum Copy Length: 65535 00:12:14.255 Maximum Source Range Count: 1 00:12:14.255 NGUID/EUI64 Never Reused: No 00:12:14.255 Namespace Write Protected: No 00:12:14.255 Number of LBA Formats: 1 00:12:14.255 Current LBA Format: LBA Format #00 00:12:14.255 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:14.255 00:12:14.255 09:26:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:14.255 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.517 [2024-06-11 09:26:46.131018] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:19.809 Initializing NVMe Controllers 00:12:19.809 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:19.809 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:19.809 Initialization complete. Launching workers. 00:12:19.809 ======================================================== 00:12:19.809 Latency(us) 00:12:19.809 Device Information : IOPS MiB/s Average min max 00:12:19.809 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34922.82 136.42 3664.25 1199.77 7833.04 00:12:19.809 ======================================================== 00:12:19.809 Total : 34922.82 136.42 3664.25 1199.77 7833.04 00:12:19.809 00:12:19.809 [2024-06-11 09:26:51.149564] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:19.809 09:26:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:19.809 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.809 [2024-06-11 09:26:51.355588] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:25.099 Initializing NVMe Controllers 00:12:25.099 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:25.099 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:25.099 Initialization complete. Launching workers. 
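[editor's note] The two spdk_nvme_perf passes here differ only in -w read versus -w write; everything else, including the controller itself, is selected by the -r transport-ID string, so the I/O path stays entirely in userspace. Restating the flags (the -s and -g readings are inferred rather than confirmed; the EAL parameter echoes earlier in this log show --single-file-segments whenever -g is passed):

    # 4 KiB I/Os at queue depth 128 for 5 s, the I/O worker pinned to core 1 (-c 0x2);
    # -s 256 sizes the hugepage pool in MB and -g maps it as single-file segments (both assumed)
    ./build/bin/spdk_nvme_perf -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'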
00:12:25.099 ======================================================== 00:12:25.099 Latency(us) 00:12:25.099 Device Information : IOPS MiB/s Average min max 00:12:25.099 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16011.38 62.54 7999.59 7031.13 11971.98 00:12:25.099 ======================================================== 00:12:25.099 Total : 16011.38 62.54 7999.59 7031.13 11971.98 00:12:25.099 00:12:25.099 [2024-06-11 09:26:56.393027] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:25.099 09:26:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:25.099 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.099 [2024-06-11 09:26:56.614063] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:30.388 [2024-06-11 09:27:01.699598] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:30.388 Initializing NVMe Controllers 00:12:30.388 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:30.388 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:30.388 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:30.388 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:30.388 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:30.388 Initialization complete. Launching workers. 00:12:30.388 Starting thread on core 2 00:12:30.388 Starting thread on core 3 00:12:30.388 Starting thread on core 1 00:12:30.388 09:27:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:30.388 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.388 [2024-06-11 09:27:01.975697] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:33.688 [2024-06-11 09:27:05.039627] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:33.688 Initializing NVMe Controllers 00:12:33.688 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:33.688 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:33.688 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:33.688 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:33.688 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:33.688 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:33.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:33.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:33.688 Initialization complete. Launching workers. 
00:12:33.688 Starting thread on core 1 with urgent priority queue 00:12:33.688 Starting thread on core 2 with urgent priority queue 00:12:33.688 Starting thread on core 3 with urgent priority queue 00:12:33.688 Starting thread on core 0 with urgent priority queue 00:12:33.688 SPDK bdev Controller (SPDK1 ) core 0: 15905.00 IO/s 6.29 secs/100000 ios 00:12:33.688 SPDK bdev Controller (SPDK1 ) core 1: 6535.67 IO/s 15.30 secs/100000 ios 00:12:33.688 SPDK bdev Controller (SPDK1 ) core 2: 11536.67 IO/s 8.67 secs/100000 ios 00:12:33.688 SPDK bdev Controller (SPDK1 ) core 3: 6522.33 IO/s 15.33 secs/100000 ios 00:12:33.688 ======================================================== 00:12:33.688 00:12:33.688 09:27:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:33.688 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.688 [2024-06-11 09:27:05.299891] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:33.688 Initializing NVMe Controllers 00:12:33.688 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:33.688 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:33.688 Namespace ID: 1 size: 0GB 00:12:33.688 Initialization complete. 00:12:33.688 INFO: using host memory buffer for IO 00:12:33.688 Hello world! 00:12:33.688 [2024-06-11 09:27:05.333121] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:33.688 09:27:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:33.688 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.947 [2024-06-11 09:27:05.595878] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:34.890 Initializing NVMe Controllers 00:12:34.890 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:34.890 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:34.890 Initialization complete. Launching workers. 
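[editor's note] The overhead run just launched measures per-I/O software cost rather than bandwidth: the submit and complete histograms it prints bucket each call by duration, with 'Range in us' rows and cumulative percentages (raw counts in parentheses), so the sparse multi-millisecond tail buckets are the lines worth reading. Its invocation, with the less obvious flags hedged as read from this output (-H appears to enable the two histograms, -d 256 looks like a DPDK memory size in MB):

    # One second of 4 KiB I/O against the vfio-user controller, histograms enabled
    ./test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'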
00:12:34.890 submit (in ns) avg, min, max = 7894.9, 3965.0, 5995314.2 00:12:34.890 complete (in ns) avg, min, max = 18066.5, 2385.0, 3999990.8 00:12:34.890 00:12:34.890 Submit histogram 00:12:34.890 ================ 00:12:34.890 Range in us Cumulative Count 00:12:34.890 3.947 - 3.973: 0.2243% ( 44) 00:12:34.890 3.973 - 4.000: 3.4409% ( 631) 00:12:34.890 4.000 - 4.027: 11.7449% ( 1629) 00:12:34.890 4.027 - 4.053: 22.7507% ( 2159) 00:12:34.890 4.053 - 4.080: 32.9969% ( 2010) 00:12:34.890 4.080 - 4.107: 43.7886% ( 2117) 00:12:34.890 4.107 - 4.133: 58.5920% ( 2904) 00:12:34.890 4.133 - 4.160: 73.7626% ( 2976) 00:12:34.890 4.160 - 4.187: 86.2262% ( 2445) 00:12:34.890 4.187 - 4.213: 93.8319% ( 1492) 00:12:34.890 4.213 - 4.240: 97.3085% ( 682) 00:12:34.890 4.240 - 4.267: 98.6695% ( 267) 00:12:34.890 4.267 - 4.293: 99.2048% ( 105) 00:12:34.890 4.293 - 4.320: 99.3781% ( 34) 00:12:34.890 4.320 - 4.347: 99.4342% ( 11) 00:12:34.890 4.347 - 4.373: 99.4444% ( 2) 00:12:34.890 4.373 - 4.400: 99.4597% ( 3) 00:12:34.890 4.400 - 4.427: 99.4749% ( 3) 00:12:34.890 4.427 - 4.453: 99.4800% ( 1) 00:12:34.890 4.453 - 4.480: 99.4851% ( 1) 00:12:34.890 4.533 - 4.560: 99.4902% ( 1) 00:12:34.890 4.613 - 4.640: 99.5004% ( 2) 00:12:34.890 4.827 - 4.853: 99.5055% ( 1) 00:12:34.890 4.907 - 4.933: 99.5106% ( 1) 00:12:34.890 4.987 - 5.013: 99.5157% ( 1) 00:12:34.890 5.147 - 5.173: 99.5208% ( 1) 00:12:34.890 5.173 - 5.200: 99.5259% ( 1) 00:12:34.890 5.307 - 5.333: 99.5310% ( 1) 00:12:34.890 5.360 - 5.387: 99.5361% ( 1) 00:12:34.890 5.520 - 5.547: 99.5412% ( 1) 00:12:34.890 5.573 - 5.600: 99.5463% ( 1) 00:12:34.890 5.627 - 5.653: 99.5514% ( 1) 00:12:34.890 5.653 - 5.680: 99.5565% ( 1) 00:12:34.890 5.813 - 5.840: 99.5616% ( 1) 00:12:34.890 6.133 - 6.160: 99.5718% ( 2) 00:12:34.890 6.187 - 6.213: 99.5769% ( 1) 00:12:34.890 6.213 - 6.240: 99.5820% ( 1) 00:12:34.890 6.240 - 6.267: 99.5871% ( 1) 00:12:34.890 6.800 - 6.827: 99.5922% ( 1) 00:12:34.890 6.880 - 6.933: 99.5973% ( 1) 00:12:34.890 7.040 - 7.093: 99.6024% ( 1) 00:12:34.890 7.093 - 7.147: 99.6126% ( 2) 00:12:34.890 7.147 - 7.200: 99.6177% ( 1) 00:12:34.890 7.200 - 7.253: 99.6228% ( 1) 00:12:34.890 7.253 - 7.307: 99.6279% ( 1) 00:12:34.890 7.307 - 7.360: 99.6330% ( 1) 00:12:34.890 7.467 - 7.520: 99.6483% ( 3) 00:12:34.890 7.520 - 7.573: 99.6585% ( 2) 00:12:34.890 7.573 - 7.627: 99.6687% ( 2) 00:12:34.890 7.627 - 7.680: 99.6839% ( 3) 00:12:34.890 7.680 - 7.733: 99.6890% ( 1) 00:12:34.890 7.733 - 7.787: 99.6941% ( 1) 00:12:34.890 7.787 - 7.840: 99.6992% ( 1) 00:12:34.890 7.840 - 7.893: 99.7145% ( 3) 00:12:34.890 7.893 - 7.947: 99.7247% ( 2) 00:12:34.890 7.947 - 8.000: 99.7400% ( 3) 00:12:34.890 8.000 - 8.053: 99.7451% ( 1) 00:12:34.890 8.053 - 8.107: 99.7553% ( 2) 00:12:34.890 8.107 - 8.160: 99.7757% ( 4) 00:12:34.890 8.160 - 8.213: 99.7808% ( 1) 00:12:34.890 8.213 - 8.267: 99.7961% ( 3) 00:12:34.890 8.320 - 8.373: 99.8012% ( 1) 00:12:34.890 8.373 - 8.427: 99.8114% ( 2) 00:12:34.890 8.427 - 8.480: 99.8267% ( 3) 00:12:34.890 8.480 - 8.533: 99.8318% ( 1) 00:12:34.890 8.533 - 8.587: 99.8369% ( 1) 00:12:34.890 8.587 - 8.640: 99.8471% ( 2) 00:12:34.890 8.640 - 8.693: 99.8573% ( 2) 00:12:34.890 8.747 - 8.800: 99.8624% ( 1) 00:12:34.890 8.853 - 8.907: 99.8726% ( 2) 00:12:34.890 8.907 - 8.960: 99.8777% ( 1) 00:12:34.890 8.960 - 9.013: 99.8828% ( 1) 00:12:34.890 9.013 - 9.067: 99.8879% ( 1) 00:12:34.890 9.173 - 9.227: 99.8929% ( 1) 00:12:34.890 9.333 - 9.387: 99.8980% ( 1) 00:12:34.890 9.440 - 9.493: 99.9031% ( 1) 00:12:34.890 9.493 - 9.547: 99.9082% ( 1) 00:12:34.890 3986.773 - 
4014.080: 99.9949% ( 17) 00:12:34.890 5980.160 - 6007.467: 100.0000% ( 1) 00:12:34.890 00:12:34.890 [2024-06-11 09:27:06.616262] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:34.890 Complete histogram 00:12:34.890 ================== 00:12:34.890 Range in us Cumulative Count 00:12:34.890 2.373 - 2.387: 0.0051% ( 1) 00:12:34.890 2.387 - 2.400: 0.1631% ( 31) 00:12:34.890 2.400 - 2.413: 1.2183% ( 207) 00:12:34.890 2.413 - 2.427: 1.3305% ( 22) 00:12:34.890 2.427 - 2.440: 1.5140% ( 36) 00:12:34.890 2.440 - 2.453: 1.5599% ( 9) 00:12:34.890 2.453 - 2.467: 3.7060% ( 421) 00:12:34.890 2.467 - 2.480: 49.3246% ( 8949) 00:12:34.890 2.480 - 2.493: 61.4722% ( 2383) 00:12:34.890 2.493 - 2.507: 71.5451% ( 1976) 00:12:34.890 2.507 - 2.520: 77.9783% ( 1262) 00:12:34.890 2.520 - 2.533: 80.9043% ( 574) 00:12:34.890 2.533 - 2.547: 85.6502% ( 931) 00:12:34.890 2.547 - 2.560: 92.1344% ( 1272) 00:12:34.890 2.560 - 2.573: 95.6211% ( 684) 00:12:34.890 2.573 - 2.587: 97.5073% ( 370) 00:12:34.890 2.587 - 2.600: 98.6899% ( 232) 00:12:34.890 2.600 - 2.613: 99.1895% ( 98) 00:12:34.890 2.613 - 2.627: 99.2965% ( 21) 00:12:34.890 2.627 - 2.640: 99.3118% ( 3) 00:12:34.890 2.653 - 2.667: 99.3169% ( 1) 00:12:34.890 3.187 - 3.200: 99.3220% ( 1) 00:12:34.890 5.227 - 5.253: 99.3271% ( 1) 00:12:34.890 5.253 - 5.280: 99.3322% ( 1) 00:12:34.890 5.280 - 5.307: 99.3373% ( 1) 00:12:34.890 5.387 - 5.413: 99.3475% ( 2) 00:12:34.890 5.413 - 5.440: 99.3526% ( 1) 00:12:34.890 5.467 - 5.493: 99.3577% ( 1) 00:12:34.890 5.493 - 5.520: 99.3730% ( 3) 00:12:34.890 5.547 - 5.573: 99.3781% ( 1) 00:12:34.890 5.573 - 5.600: 99.3832% ( 1) 00:12:34.890 5.627 - 5.653: 99.3934% ( 2) 00:12:34.890 5.787 - 5.813: 99.3985% ( 1) 00:12:34.890 5.893 - 5.920: 99.4036% ( 1) 00:12:34.890 5.920 - 5.947: 99.4138% ( 2) 00:12:34.890 6.000 - 6.027: 99.4189% ( 1) 00:12:34.890 6.027 - 6.053: 99.4342% ( 3) 00:12:34.890 6.053 - 6.080: 99.4393% ( 1) 00:12:34.890 6.160 - 6.187: 99.4444% ( 1) 00:12:34.890 6.187 - 6.213: 99.4546% ( 2) 00:12:34.890 6.213 - 6.240: 99.4597% ( 1) 00:12:34.890 6.267 - 6.293: 99.4698% ( 2) 00:12:34.890 6.293 - 6.320: 99.4749% ( 1) 00:12:34.890 6.400 - 6.427: 99.4800% ( 1) 00:12:34.890 6.427 - 6.453: 99.4851% ( 1) 00:12:34.890 6.453 - 6.480: 99.4902% ( 1) 00:12:34.890 6.560 - 6.587: 99.4953% ( 1) 00:12:34.890 6.587 - 6.613: 99.5004% ( 1) 00:12:34.890 6.640 - 6.667: 99.5055% ( 1) 00:12:34.890 6.693 - 6.720: 99.5157% ( 2) 00:12:34.890 6.933 - 6.987: 99.5208% ( 1) 00:12:34.890 6.987 - 7.040: 99.5310% ( 2) 00:12:34.890 7.040 - 7.093: 99.5361% ( 1) 00:12:34.890 7.147 - 7.200: 99.5463% ( 2) 00:12:34.890 7.253 - 7.307: 99.5514% ( 1) 00:12:34.890 7.307 - 7.360: 99.5565% ( 1) 00:12:34.890 7.360 - 7.413: 99.5616% ( 1) 00:12:34.890 7.680 - 7.733: 99.5667% ( 1) 00:12:34.890 7.787 - 7.840: 99.5718% ( 1) 00:12:34.890 8.000 - 8.053: 99.5769% ( 1) 00:12:34.890 8.373 - 8.427: 99.5820% ( 1) 00:12:34.890 8.533 - 8.587: 99.5871% ( 1) 00:12:34.890 13.387 - 13.440: 99.5922% ( 1) 00:12:34.890 13.493 - 13.547: 99.5973% ( 1) 00:12:34.890 13.760 - 13.867: 99.6024% ( 1) 00:12:34.890 15.467 - 15.573: 99.6075% ( 1) 00:12:34.890 1536.000 - 1542.827: 99.6126% ( 1) 00:12:34.891 3986.773 - 4014.080: 100.0000% ( 76) 00:12:34.891 00:12:34.891 09:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:34.891 09:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 
00:12:34.891 09:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:34.891 09:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:34.891 09:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:35.151 [ 00:12:35.151 { 00:12:35.151 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:35.151 "subtype": "Discovery", 00:12:35.151 "listen_addresses": [], 00:12:35.151 "allow_any_host": true, 00:12:35.151 "hosts": [] 00:12:35.151 }, 00:12:35.151 { 00:12:35.151 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:35.151 "subtype": "NVMe", 00:12:35.151 "listen_addresses": [ 00:12:35.151 { 00:12:35.151 "trtype": "VFIOUSER", 00:12:35.151 "adrfam": "IPv4", 00:12:35.151 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:35.151 "trsvcid": "0" 00:12:35.151 } 00:12:35.151 ], 00:12:35.151 "allow_any_host": true, 00:12:35.151 "hosts": [], 00:12:35.151 "serial_number": "SPDK1", 00:12:35.151 "model_number": "SPDK bdev Controller", 00:12:35.151 "max_namespaces": 32, 00:12:35.151 "min_cntlid": 1, 00:12:35.151 "max_cntlid": 65519, 00:12:35.151 "namespaces": [ 00:12:35.151 { 00:12:35.151 "nsid": 1, 00:12:35.151 "bdev_name": "Malloc1", 00:12:35.151 "name": "Malloc1", 00:12:35.151 "nguid": "F81701457B174B608DDD9AFC2B35222C", 00:12:35.151 "uuid": "f8170145-7b17-4b60-8ddd-9afc2b35222c" 00:12:35.151 } 00:12:35.151 ] 00:12:35.151 }, 00:12:35.151 { 00:12:35.151 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:35.151 "subtype": "NVMe", 00:12:35.151 "listen_addresses": [ 00:12:35.151 { 00:12:35.151 "trtype": "VFIOUSER", 00:12:35.151 "adrfam": "IPv4", 00:12:35.151 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:35.151 "trsvcid": "0" 00:12:35.151 } 00:12:35.151 ], 00:12:35.151 "allow_any_host": true, 00:12:35.151 "hosts": [], 00:12:35.151 "serial_number": "SPDK2", 00:12:35.151 "model_number": "SPDK bdev Controller", 00:12:35.151 "max_namespaces": 32, 00:12:35.151 "min_cntlid": 1, 00:12:35.151 "max_cntlid": 65519, 00:12:35.151 "namespaces": [ 00:12:35.151 { 00:12:35.151 "nsid": 1, 00:12:35.151 "bdev_name": "Malloc2", 00:12:35.151 "name": "Malloc2", 00:12:35.151 "nguid": "80B56843EDB74BD4B6AFD0227202EDC3", 00:12:35.151 "uuid": "80b56843-edb7-4bd4-b6af-d0227202edc3" 00:12:35.151 } 00:12:35.151 ] 00:12:35.151 } 00:12:35.151 ] 00:12:35.151 09:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:35.151 09:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1018372 00:12:35.151 09:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:35.151 09:27:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:12:35.151 09:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:35.151 09:27:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:35.151 09:27:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:35.151 09:27:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:12:35.151 09:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:35.151 09:27:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:35.151 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.411 [2024-06-11 09:27:07.054772] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:35.411 Malloc3 00:12:35.411 09:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:35.672 [2024-06-11 09:27:07.312948] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:35.672 09:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:35.672 Asynchronous Event Request test 00:12:35.672 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:35.672 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:35.672 Registering asynchronous event callbacks... 00:12:35.672 Starting namespace attribute notice tests for all controllers... 00:12:35.672 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:35.672 aer_cb - Changed Namespace 00:12:35.672 Cleaning up... 00:12:35.934 [ 00:12:35.934 { 00:12:35.934 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:35.934 "subtype": "Discovery", 00:12:35.934 "listen_addresses": [], 00:12:35.934 "allow_any_host": true, 00:12:35.934 "hosts": [] 00:12:35.934 }, 00:12:35.934 { 00:12:35.934 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:35.934 "subtype": "NVMe", 00:12:35.934 "listen_addresses": [ 00:12:35.934 { 00:12:35.934 "trtype": "VFIOUSER", 00:12:35.934 "adrfam": "IPv4", 00:12:35.934 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:35.934 "trsvcid": "0" 00:12:35.934 } 00:12:35.934 ], 00:12:35.934 "allow_any_host": true, 00:12:35.934 "hosts": [], 00:12:35.934 "serial_number": "SPDK1", 00:12:35.934 "model_number": "SPDK bdev Controller", 00:12:35.934 "max_namespaces": 32, 00:12:35.934 "min_cntlid": 1, 00:12:35.934 "max_cntlid": 65519, 00:12:35.934 "namespaces": [ 00:12:35.934 { 00:12:35.934 "nsid": 1, 00:12:35.934 "bdev_name": "Malloc1", 00:12:35.934 "name": "Malloc1", 00:12:35.934 "nguid": "F81701457B174B608DDD9AFC2B35222C", 00:12:35.934 "uuid": "f8170145-7b17-4b60-8ddd-9afc2b35222c" 00:12:35.934 }, 00:12:35.934 { 00:12:35.934 "nsid": 2, 00:12:35.934 "bdev_name": "Malloc3", 00:12:35.934 "name": "Malloc3", 00:12:35.934 "nguid": "C9A9748ED7604F969F216407D37BC40E", 00:12:35.934 "uuid": "c9a9748e-d760-4f96-9f21-6407d37bc40e" 00:12:35.934 } 00:12:35.934 ] 00:12:35.934 }, 00:12:35.934 { 00:12:35.934 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:35.934 "subtype": "NVMe", 00:12:35.934 "listen_addresses": [ 00:12:35.934 { 00:12:35.934 "trtype": "VFIOUSER", 00:12:35.934 "adrfam": "IPv4", 00:12:35.934 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:35.934 "trsvcid": "0" 00:12:35.934 } 00:12:35.934 ], 00:12:35.934 "allow_any_host": true, 00:12:35.934 "hosts": [], 00:12:35.934 "serial_number": "SPDK2", 00:12:35.934 "model_number": "SPDK bdev Controller", 00:12:35.934 
"max_namespaces": 32, 00:12:35.934 "min_cntlid": 1, 00:12:35.934 "max_cntlid": 65519, 00:12:35.934 "namespaces": [ 00:12:35.934 { 00:12:35.934 "nsid": 1, 00:12:35.934 "bdev_name": "Malloc2", 00:12:35.934 "name": "Malloc2", 00:12:35.934 "nguid": "80B56843EDB74BD4B6AFD0227202EDC3", 00:12:35.934 "uuid": "80b56843-edb7-4bd4-b6af-d0227202edc3" 00:12:35.934 } 00:12:35.934 ] 00:12:35.934 } 00:12:35.934 ] 00:12:35.934 09:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1018372 00:12:35.934 09:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:35.934 09:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:35.934 09:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:35.934 09:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:35.934 [2024-06-11 09:27:07.582455] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:12:35.934 [2024-06-11 09:27:07.582501] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018559 ] 00:12:35.934 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.934 [2024-06-11 09:27:07.614892] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:35.934 [2024-06-11 09:27:07.623871] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:35.934 [2024-06-11 09:27:07.623892] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f308619a000 00:12:35.934 [2024-06-11 09:27:07.624874] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:35.934 [2024-06-11 09:27:07.625882] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:35.934 [2024-06-11 09:27:07.626891] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:35.934 [2024-06-11 09:27:07.627902] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:35.934 [2024-06-11 09:27:07.628908] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:35.934 [2024-06-11 09:27:07.629915] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:35.934 [2024-06-11 09:27:07.630928] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:35.934 [2024-06-11 09:27:07.631929] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:35.934 [2024-06-11 09:27:07.632935] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:35.934 [2024-06-11 09:27:07.632948] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f308618f000 00:12:35.934 [2024-06-11 09:27:07.634275] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:35.934 [2024-06-11 09:27:07.654478] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:35.934 [2024-06-11 09:27:07.654500] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:35.934 [2024-06-11 09:27:07.656565] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:35.934 [2024-06-11 09:27:07.656610] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:35.934 [2024-06-11 09:27:07.656698] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:35.935 [2024-06-11 09:27:07.656714] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:35.935 [2024-06-11 09:27:07.656719] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:35.935 [2024-06-11 09:27:07.657571] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:35.935 [2024-06-11 09:27:07.657581] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:35.935 [2024-06-11 09:27:07.657588] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:35.935 [2024-06-11 09:27:07.658580] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:35.935 [2024-06-11 09:27:07.658590] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:35.935 [2024-06-11 09:27:07.658597] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:35.935 [2024-06-11 09:27:07.659582] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:35.935 [2024-06-11 09:27:07.659590] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:35.935 [2024-06-11 09:27:07.660587] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:35.935 [2024-06-11 09:27:07.660595] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:35.935 [2024-06-11 09:27:07.660600] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:35.935 [2024-06-11 09:27:07.660606] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:35.935 [2024-06-11 09:27:07.660712] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:35.935 [2024-06-11 09:27:07.660716] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:35.935 [2024-06-11 09:27:07.660721] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:35.935 [2024-06-11 09:27:07.661594] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:35.935 [2024-06-11 09:27:07.662595] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:35.935 [2024-06-11 09:27:07.663604] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:35.935 [2024-06-11 09:27:07.664605] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:35.935 [2024-06-11 09:27:07.664645] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:35.935 [2024-06-11 09:27:07.665623] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:35.935 [2024-06-11 09:27:07.665632] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:35.935 [2024-06-11 09:27:07.665640] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:35.935 [2024-06-11 09:27:07.665661] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:35.935 [2024-06-11 09:27:07.665668] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:35.935 [2024-06-11 09:27:07.665681] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:35.935 [2024-06-11 09:27:07.665686] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:35.935 [2024-06-11 09:27:07.665698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:35.935 [2024-06-11 09:27:07.676322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:35.935 [2024-06-11 09:27:07.676333] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:35.935 [2024-06-11 09:27:07.676338] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:35.935 [2024-06-11 09:27:07.676342] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:35.935 [2024-06-11 09:27:07.676349] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:35.935 [2024-06-11 09:27:07.676353] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:35.935 [2024-06-11 09:27:07.676358] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:35.935 [2024-06-11 09:27:07.676363] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:35.935 [2024-06-11 09:27:07.676370] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:35.935 [2024-06-11 09:27:07.676380] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:35.935 [2024-06-11 09:27:07.684322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:35.935 [2024-06-11 09:27:07.684335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.935 [2024-06-11 09:27:07.684344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.935 [2024-06-11 09:27:07.684352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.935 [2024-06-11 09:27:07.684360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.935 [2024-06-11 09:27:07.684364] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:35.935 [2024-06-11 09:27:07.684373] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:35.935 [2024-06-11 09:27:07.684382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:35.935 [2024-06-11 09:27:07.692329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:35.935 [2024-06-11 09:27:07.692339] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:35.935 [2024-06-11 09:27:07.692344] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:35.935 [2024-06-11 09:27:07.692351] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:35.935 [2024-06-11 09:27:07.692356] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:35.935 [2024-06-11 09:27:07.692365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:35.935 [2024-06-11 09:27:07.700319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:35.935 [2024-06-11 09:27:07.700371] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:35.935 [2024-06-11 09:27:07.700380] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:35.935 [2024-06-11 09:27:07.700387] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:35.935 [2024-06-11 09:27:07.700392] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:35.935 [2024-06-11 09:27:07.700398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:35.935 [2024-06-11 09:27:07.708322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:35.935 [2024-06-11 09:27:07.708334] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:35.935 [2024-06-11 09:27:07.708347] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:35.935 [2024-06-11 09:27:07.708355] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:35.935 [2024-06-11 09:27:07.708361] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:35.935 [2024-06-11 09:27:07.708366] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:35.935 [2024-06-11 09:27:07.708372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:35.935 [2024-06-11 09:27:07.716324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:35.935 [2024-06-11 09:27:07.716337] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:35.935 [2024-06-11 09:27:07.716344] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:35.935 [2024-06-11 09:27:07.716352] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:35.935 [2024-06-11 09:27:07.716356] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:35.935 [2024-06-11 09:27:07.716362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:35.935 [2024-06-11 09:27:07.724322] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:35.935 [2024-06-11 09:27:07.724332] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:35.935 [2024-06-11 09:27:07.724343] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:35.935 [2024-06-11 09:27:07.724351] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:35.935 [2024-06-11 09:27:07.724356] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:35.936 [2024-06-11 09:27:07.724361] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:35.936 [2024-06-11 09:27:07.724366] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:35.936 [2024-06-11 09:27:07.724370] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:35.936 [2024-06-11 09:27:07.724375] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:35.936 [2024-06-11 09:27:07.724394] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:35.936 [2024-06-11 09:27:07.732323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:35.936 [2024-06-11 09:27:07.732336] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:35.936 [2024-06-11 09:27:07.737704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:35.936 [2024-06-11 09:27:07.737719] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:35.936 [2024-06-11 09:27:07.747324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:35.936 [2024-06-11 09:27:07.747338] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:36.197 [2024-06-11 09:27:07.755323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:36.197 [2024-06-11 09:27:07.755337] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:36.197 [2024-06-11 09:27:07.755342] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:36.197 [2024-06-11 09:27:07.755345] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:36.197 [2024-06-11 09:27:07.755349] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:36.197 [2024-06-11 09:27:07.755355] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:36.197 [2024-06-11 09:27:07.755363] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:36.197 [2024-06-11 09:27:07.755367] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:36.197 [2024-06-11 09:27:07.755373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:36.197 [2024-06-11 09:27:07.755380] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:36.197 [2024-06-11 09:27:07.755384] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:36.197 [2024-06-11 09:27:07.755390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:36.197 [2024-06-11 09:27:07.755400] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:36.197 [2024-06-11 09:27:07.755404] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:36.197 [2024-06-11 09:27:07.755410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:36.197 [2024-06-11 09:27:07.763321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:36.197 [2024-06-11 09:27:07.763336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:36.197 [2024-06-11 09:27:07.763345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:36.197 [2024-06-11 09:27:07.763354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:36.197 ===================================================== 00:12:36.197 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:36.197 ===================================================== 00:12:36.197 Controller Capabilities/Features 00:12:36.197 ================================ 00:12:36.197 Vendor ID: 4e58 00:12:36.197 Subsystem Vendor ID: 4e58 00:12:36.197 Serial Number: SPDK2 00:12:36.197 Model Number: SPDK bdev Controller 00:12:36.197 Firmware Version: 24.09 00:12:36.197 Recommended Arb Burst: 6 00:12:36.197 IEEE OUI Identifier: 8d 6b 50 00:12:36.197 Multi-path I/O 00:12:36.198 May have multiple subsystem ports: Yes 00:12:36.198 May have multiple controllers: Yes 00:12:36.198 Associated with SR-IOV VF: No 00:12:36.198 Max Data Transfer Size: 131072 00:12:36.198 Max Number of Namespaces: 32 00:12:36.198 Max Number of I/O Queues: 127 00:12:36.198 NVMe Specification Version (VS): 1.3 00:12:36.198 NVMe Specification Version (Identify): 1.3 00:12:36.198 Maximum Queue Entries: 256 00:12:36.198 Contiguous Queues Required: Yes 00:12:36.198 Arbitration Mechanisms Supported 00:12:36.198 Weighted Round Robin: Not Supported 00:12:36.198 Vendor Specific: Not Supported 00:12:36.198 Reset Timeout: 15000 ms 00:12:36.198 Doorbell Stride: 4 bytes 
00:12:36.198 NVM Subsystem Reset: Not Supported 00:12:36.198 Command Sets Supported 00:12:36.198 NVM Command Set: Supported 00:12:36.198 Boot Partition: Not Supported 00:12:36.198 Memory Page Size Minimum: 4096 bytes 00:12:36.198 Memory Page Size Maximum: 4096 bytes 00:12:36.198 Persistent Memory Region: Not Supported 00:12:36.198 Optional Asynchronous Events Supported 00:12:36.198 Namespace Attribute Notices: Supported 00:12:36.198 Firmware Activation Notices: Not Supported 00:12:36.198 ANA Change Notices: Not Supported 00:12:36.198 PLE Aggregate Log Change Notices: Not Supported 00:12:36.198 LBA Status Info Alert Notices: Not Supported 00:12:36.198 EGE Aggregate Log Change Notices: Not Supported 00:12:36.198 Normal NVM Subsystem Shutdown event: Not Supported 00:12:36.198 Zone Descriptor Change Notices: Not Supported 00:12:36.198 Discovery Log Change Notices: Not Supported 00:12:36.198 Controller Attributes 00:12:36.198 128-bit Host Identifier: Supported 00:12:36.198 Non-Operational Permissive Mode: Not Supported 00:12:36.198 NVM Sets: Not Supported 00:12:36.198 Read Recovery Levels: Not Supported 00:12:36.198 Endurance Groups: Not Supported 00:12:36.198 Predictable Latency Mode: Not Supported 00:12:36.198 Traffic Based Keep ALive: Not Supported 00:12:36.198 Namespace Granularity: Not Supported 00:12:36.198 SQ Associations: Not Supported 00:12:36.198 UUID List: Not Supported 00:12:36.198 Multi-Domain Subsystem: Not Supported 00:12:36.198 Fixed Capacity Management: Not Supported 00:12:36.198 Variable Capacity Management: Not Supported 00:12:36.198 Delete Endurance Group: Not Supported 00:12:36.198 Delete NVM Set: Not Supported 00:12:36.198 Extended LBA Formats Supported: Not Supported 00:12:36.198 Flexible Data Placement Supported: Not Supported 00:12:36.198 00:12:36.198 Controller Memory Buffer Support 00:12:36.198 ================================ 00:12:36.198 Supported: No 00:12:36.198 00:12:36.198 Persistent Memory Region Support 00:12:36.198 ================================ 00:12:36.198 Supported: No 00:12:36.198 00:12:36.198 Admin Command Set Attributes 00:12:36.198 ============================ 00:12:36.198 Security Send/Receive: Not Supported 00:12:36.198 Format NVM: Not Supported 00:12:36.198 Firmware Activate/Download: Not Supported 00:12:36.198 Namespace Management: Not Supported 00:12:36.198 Device Self-Test: Not Supported 00:12:36.198 Directives: Not Supported 00:12:36.198 NVMe-MI: Not Supported 00:12:36.198 Virtualization Management: Not Supported 00:12:36.198 Doorbell Buffer Config: Not Supported 00:12:36.198 Get LBA Status Capability: Not Supported 00:12:36.198 Command & Feature Lockdown Capability: Not Supported 00:12:36.198 Abort Command Limit: 4 00:12:36.198 Async Event Request Limit: 4 00:12:36.198 Number of Firmware Slots: N/A 00:12:36.198 Firmware Slot 1 Read-Only: N/A 00:12:36.198 Firmware Activation Without Reset: N/A 00:12:36.198 Multiple Update Detection Support: N/A 00:12:36.198 Firmware Update Granularity: No Information Provided 00:12:36.198 Per-Namespace SMART Log: No 00:12:36.198 Asymmetric Namespace Access Log Page: Not Supported 00:12:36.198 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:36.198 Command Effects Log Page: Supported 00:12:36.198 Get Log Page Extended Data: Supported 00:12:36.198 Telemetry Log Pages: Not Supported 00:12:36.198 Persistent Event Log Pages: Not Supported 00:12:36.198 Supported Log Pages Log Page: May Support 00:12:36.198 Commands Supported & Effects Log Page: Not Supported 00:12:36.198 Feature Identifiers & Effects Log Page:May 
Support 00:12:36.198 NVMe-MI Commands & Effects Log Page: May Support 00:12:36.198 Data Area 4 for Telemetry Log: Not Supported 00:12:36.198 Error Log Page Entries Supported: 128 00:12:36.198 Keep Alive: Supported 00:12:36.198 Keep Alive Granularity: 10000 ms 00:12:36.198 00:12:36.198 NVM Command Set Attributes 00:12:36.198 ========================== 00:12:36.198 Submission Queue Entry Size 00:12:36.198 Max: 64 00:12:36.198 Min: 64 00:12:36.198 Completion Queue Entry Size 00:12:36.198 Max: 16 00:12:36.198 Min: 16 00:12:36.198 Number of Namespaces: 32 00:12:36.198 Compare Command: Supported 00:12:36.198 Write Uncorrectable Command: Not Supported 00:12:36.198 Dataset Management Command: Supported 00:12:36.198 Write Zeroes Command: Supported 00:12:36.198 Set Features Save Field: Not Supported 00:12:36.198 Reservations: Not Supported 00:12:36.198 Timestamp: Not Supported 00:12:36.198 Copy: Supported 00:12:36.198 Volatile Write Cache: Present 00:12:36.198 Atomic Write Unit (Normal): 1 00:12:36.198 Atomic Write Unit (PFail): 1 00:12:36.198 Atomic Compare & Write Unit: 1 00:12:36.198 Fused Compare & Write: Supported 00:12:36.198 Scatter-Gather List 00:12:36.198 SGL Command Set: Supported (Dword aligned) 00:12:36.198 SGL Keyed: Not Supported 00:12:36.198 SGL Bit Bucket Descriptor: Not Supported 00:12:36.198 SGL Metadata Pointer: Not Supported 00:12:36.198 Oversized SGL: Not Supported 00:12:36.198 SGL Metadata Address: Not Supported 00:12:36.198 SGL Offset: Not Supported 00:12:36.198 Transport SGL Data Block: Not Supported 00:12:36.198 Replay Protected Memory Block: Not Supported 00:12:36.198 00:12:36.198 Firmware Slot Information 00:12:36.198 ========================= 00:12:36.198 Active slot: 1 00:12:36.198 Slot 1 Firmware Revision: 24.09 00:12:36.198 00:12:36.198 00:12:36.198 Commands Supported and Effects 00:12:36.198 ============================== 00:12:36.198 Admin Commands 00:12:36.198 -------------- 00:12:36.198 Get Log Page (02h): Supported 00:12:36.198 Identify (06h): Supported 00:12:36.198 Abort (08h): Supported 00:12:36.198 Set Features (09h): Supported 00:12:36.198 Get Features (0Ah): Supported 00:12:36.198 Asynchronous Event Request (0Ch): Supported 00:12:36.198 Keep Alive (18h): Supported 00:12:36.198 I/O Commands 00:12:36.198 ------------ 00:12:36.198 Flush (00h): Supported LBA-Change 00:12:36.198 Write (01h): Supported LBA-Change 00:12:36.198 Read (02h): Supported 00:12:36.198 Compare (05h): Supported 00:12:36.198 Write Zeroes (08h): Supported LBA-Change 00:12:36.198 Dataset Management (09h): Supported LBA-Change 00:12:36.198 Copy (19h): Supported LBA-Change 00:12:36.198 Unknown (79h): Supported LBA-Change 00:12:36.198 Unknown (7Ah): Supported 00:12:36.198 00:12:36.198 Error Log 00:12:36.198 ========= 00:12:36.198 00:12:36.198 Arbitration 00:12:36.198 =========== 00:12:36.198 Arbitration Burst: 1 00:12:36.198 00:12:36.198 Power Management 00:12:36.198 ================ 00:12:36.198 Number of Power States: 1 00:12:36.198 Current Power State: Power State #0 00:12:36.198 Power State #0: 00:12:36.198 Max Power: 0.00 W 00:12:36.198 Non-Operational State: Operational 00:12:36.198 Entry Latency: Not Reported 00:12:36.198 Exit Latency: Not Reported 00:12:36.198 Relative Read Throughput: 0 00:12:36.198 Relative Read Latency: 0 00:12:36.198 Relative Write Throughput: 0 00:12:36.198 Relative Write Latency: 0 00:12:36.198 Idle Power: Not Reported 00:12:36.198 Active Power: Not Reported 00:12:36.198 Non-Operational Permissive Mode: Not Supported 00:12:36.198 00:12:36.198 Health Information 
00:12:36.198 ================== 00:12:36.199 Critical Warnings: 00:12:36.199 Available Spare Space: OK 00:12:36.199 Temperature: OK 00:12:36.199 Device Reliability: OK 00:12:36.199 Read Only: No 00:12:36.199 Volatile Memory Backup: OK 00:12:36.199 [2024-06-11 09:27:07.763456] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:36.198 [2024-06-11 09:27:07.771320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:36.198 [2024-06-11 09:27:07.771348] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:36.198 [2024-06-11 09:27:07.771356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.199 [2024-06-11 09:27:07.771363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.199 [2024-06-11 09:27:07.771369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.199 [2024-06-11 09:27:07.771375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:36.199 [2024-06-11 09:27:07.771414] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:36.199 [2024-06-11 09:27:07.771424] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:36.199 [2024-06-11 09:27:07.772428] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:36.199 [2024-06-11 09:27:07.772476] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:36.199 [2024-06-11 09:27:07.772483] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:36.199 [2024-06-11 09:27:07.773439] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:36.199 [2024-06-11 09:27:07.773450] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:36.199 [2024-06-11 09:27:07.773502] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:36.199 [2024-06-11 09:27:07.774882] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:36.199 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:36.199 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:36.199 Available Spare: 0% 00:12:36.199 Available Spare Threshold: 0% 00:12:36.199 Life Percentage Used: 0% 00:12:36.199 Data Units Read: 0 00:12:36.199 Data Units Written: 0 00:12:36.199 Host Read Commands: 0 00:12:36.199 Host Write Commands: 0 00:12:36.199 Controller Busy Time: 0 minutes 00:12:36.199 Power Cycles: 0 00:12:36.199 Power On Hours: 0 hours 00:12:36.199 Unsafe Shutdowns: 0 00:12:36.199 Unrecoverable Media Errors: 0 00:12:36.199 Lifetime Error Log Entries: 0 00:12:36.199 Warning Temperature Time: 0
minutes 00:12:36.199 Critical Temperature Time: 0 minutes 00:12:36.199 00:12:36.199 Number of Queues 00:12:36.199 ================ 00:12:36.199 Number of I/O Submission Queues: 127 00:12:36.199 Number of I/O Completion Queues: 127 00:12:36.199 00:12:36.199 Active Namespaces 00:12:36.199 ================= 00:12:36.199 Namespace ID:1 00:12:36.199 Error Recovery Timeout: Unlimited 00:12:36.199 Command Set Identifier: NVM (00h) 00:12:36.199 Deallocate: Supported 00:12:36.199 Deallocated/Unwritten Error: Not Supported 00:12:36.199 Deallocated Read Value: Unknown 00:12:36.199 Deallocate in Write Zeroes: Not Supported 00:12:36.199 Deallocated Guard Field: 0xFFFF 00:12:36.199 Flush: Supported 00:12:36.199 Reservation: Supported 00:12:36.199 Namespace Sharing Capabilities: Multiple Controllers 00:12:36.199 Size (in LBAs): 131072 (0GiB) 00:12:36.199 Capacity (in LBAs): 131072 (0GiB) 00:12:36.199 Utilization (in LBAs): 131072 (0GiB) 00:12:36.199 NGUID: 80B56843EDB74BD4B6AFD0227202EDC3 00:12:36.199 UUID: 80b56843-edb7-4bd4-b6af-d0227202edc3 00:12:36.199 Thin Provisioning: Not Supported 00:12:36.199 Per-NS Atomic Units: Yes 00:12:36.199 Atomic Boundary Size (Normal): 0 00:12:36.199 Atomic Boundary Size (PFail): 0 00:12:36.199 Atomic Boundary Offset: 0 00:12:36.199 Maximum Single Source Range Length: 65535 00:12:36.199 Maximum Copy Length: 65535 00:12:36.199 Maximum Source Range Count: 1 00:12:36.199 NGUID/EUI64 Never Reused: No 00:12:36.199 Namespace Write Protected: No 00:12:36.199 Number of LBA Formats: 1 00:12:36.199 Current LBA Format: LBA Format #00 00:12:36.199 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:36.199 00:12:36.199 09:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:36.199 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.199 [2024-06-11 09:27:07.976589] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:41.520 Initializing NVMe Controllers 00:12:41.520 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:41.520 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:41.520 Initialization complete. Launching workers. 
00:12:41.520 ======================================================== 00:12:41.520 Latency(us) 00:12:41.520 Device Information : IOPS MiB/s Average min max 00:12:41.520 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 44091.69 172.23 2902.39 902.64 6545.91 00:12:41.520 ======================================================== 00:12:41.520 Total : 44091.69 172.23 2902.39 902.64 6545.91 00:12:41.520 00:12:41.520 [2024-06-11 09:27:13.081531] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:41.520 09:27:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:41.521 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.521 [2024-06-11 09:27:13.286218] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:46.810 Initializing NVMe Controllers 00:12:46.810 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:46.810 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:46.810 Initialization complete. Launching workers. 00:12:46.810 ======================================================== 00:12:46.810 Latency(us) 00:12:46.810 Device Information : IOPS MiB/s Average min max 00:12:46.810 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34403.72 134.39 3720.55 1203.58 10677.98 00:12:46.810 ======================================================== 00:12:46.810 Total : 34403.72 134.39 3720.55 1203.58 10677.98 00:12:46.810 00:12:46.810 [2024-06-11 09:27:18.308122] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:46.810 09:27:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:46.810 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.810 [2024-06-11 09:27:18.535612] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:52.100 [2024-06-11 09:27:23.680417] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:52.100 Initializing NVMe Controllers 00:12:52.100 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:52.100 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:52.100 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:52.100 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:52.100 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:52.100 Initialization complete. Launching workers. 
00:12:52.100 Starting thread on core 2 00:12:52.100 Starting thread on core 3 00:12:52.100 Starting thread on core 1 00:12:52.100 09:27:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:52.100 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.361 [2024-06-11 09:27:23.946645] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:55.665 [2024-06-11 09:27:27.002797] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:55.665 Initializing NVMe Controllers 00:12:55.665 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:55.665 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:55.665 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:55.665 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:55.665 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:55.665 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:55.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:55.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:55.665 Initialization complete. Launching workers. 00:12:55.665 Starting thread on core 1 with urgent priority queue 00:12:55.665 Starting thread on core 2 with urgent priority queue 00:12:55.665 Starting thread on core 3 with urgent priority queue 00:12:55.665 Starting thread on core 0 with urgent priority queue 00:12:55.665 SPDK bdev Controller (SPDK2 ) core 0: 4415.33 IO/s 22.65 secs/100000 ios 00:12:55.665 SPDK bdev Controller (SPDK2 ) core 1: 5651.00 IO/s 17.70 secs/100000 ios 00:12:55.665 SPDK bdev Controller (SPDK2 ) core 2: 5061.67 IO/s 19.76 secs/100000 ios 00:12:55.665 SPDK bdev Controller (SPDK2 ) core 3: 5794.33 IO/s 17.26 secs/100000 ios 00:12:55.665 ======================================================== 00:12:55.665 00:12:55.665 09:27:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:55.665 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.665 [2024-06-11 09:27:27.266735] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:55.665 Initializing NVMe Controllers 00:12:55.665 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:55.665 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:55.665 Namespace ID: 1 size: 0GB 00:12:55.665 Initialization complete. 00:12:55.665 INFO: using host memory buffer for IO 00:12:55.665 Hello world! 
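Steps @83 through @89 of nvmf_vfio_user.sh, traced above and completed by the overhead run below, all follow one pattern: each SPDK example binary is pointed at the same vfio-user controller through a shared -r transport string, and only the per-tool workload flags change. Condensed into a sketch (arguments copied from this run's invocations; the flattened sequence is a summary, not the literal script):

# Per-device example sweep against one vfio-user controller (flags from this run).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
R='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

"$SPDK/build/bin/spdk_nvme_identify" -r "$R" -g -L nvme -L nvme_vfio -L vfio_pci
"$SPDK/build/bin/spdk_nvme_perf" -r "$R" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
"$SPDK/build/bin/spdk_nvme_perf" -r "$R" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
"$SPDK/build/examples/reconnect" -r "$R" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
"$SPDK/build/examples/arbitration" -r "$R" -t 3 -d 256 -g
"$SPDK/build/examples/hello_world" -r "$R" -d 256 -g
"$SPDK/test/nvme/overhead/overhead" -r "$R" -o 4096 -t 1 -H -g -d 256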
00:12:55.665 [2024-06-11 09:27:27.276800] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:55.665 09:27:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:55.665 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.925 [2024-06-11 09:27:27.527611] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:56.866 Initializing NVMe Controllers 00:12:56.866 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:56.866 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:56.866 Initialization complete. Launching workers. 00:12:56.866 submit (in ns) avg, min, max = 8841.3, 3920.0, 4001220.0 00:12:56.866 complete (in ns) avg, min, max = 23718.1, 2372.5, 4997179.2 00:12:56.866 00:12:56.866 Submit histogram 00:12:56.866 ================ 00:12:56.866 Range in us Cumulative Count 00:12:56.866 3.920 - 3.947: 2.3972% ( 362) 00:12:56.866 3.947 - 3.973: 9.7808% ( 1115) 00:12:56.866 3.973 - 4.000: 19.6609% ( 1492) 00:12:56.866 4.000 - 4.027: 30.3093% ( 1608) 00:12:56.866 4.027 - 4.053: 40.3351% ( 1514) 00:12:56.866 4.053 - 4.080: 51.4204% ( 1674) 00:12:56.866 4.080 - 4.107: 67.7372% ( 2464) 00:12:56.866 4.107 - 4.133: 83.4249% ( 2369) 00:12:56.866 4.133 - 4.160: 92.8018% ( 1416) 00:12:56.866 4.160 - 4.187: 97.4108% ( 696) 00:12:56.866 4.187 - 4.213: 98.8875% ( 223) 00:12:56.866 4.213 - 4.240: 99.3577% ( 71) 00:12:56.866 4.240 - 4.267: 99.4702% ( 17) 00:12:56.866 4.267 - 4.293: 99.5232% ( 8) 00:12:56.866 4.293 - 4.320: 99.5431% ( 3) 00:12:56.866 4.373 - 4.400: 99.5497% ( 1) 00:12:56.866 4.533 - 4.560: 99.5563% ( 1) 00:12:56.866 4.667 - 4.693: 99.5629% ( 1) 00:12:56.866 5.413 - 5.440: 99.5696% ( 1) 00:12:56.866 5.520 - 5.547: 99.5762% ( 1) 00:12:56.866 5.787 - 5.813: 99.5828% ( 1) 00:12:56.866 5.947 - 5.973: 99.5961% ( 2) 00:12:56.866 5.973 - 6.000: 99.6027% ( 1) 00:12:56.866 6.240 - 6.267: 99.6093% ( 1) 00:12:56.866 6.267 - 6.293: 99.6159% ( 1) 00:12:56.866 6.320 - 6.347: 99.6225% ( 1) 00:12:56.866 6.613 - 6.640: 99.6292% ( 1) 00:12:56.866 6.720 - 6.747: 99.6358% ( 1) 00:12:56.866 6.773 - 6.800: 99.6424% ( 1) 00:12:56.866 6.827 - 6.880: 99.6490% ( 1) 00:12:56.866 6.987 - 7.040: 99.6557% ( 1) 00:12:56.866 7.147 - 7.200: 99.6689% ( 2) 00:12:56.866 7.253 - 7.307: 99.6888% ( 3) 00:12:56.866 7.360 - 7.413: 99.6954% ( 1) 00:12:56.866 7.413 - 7.467: 99.7020% ( 1) 00:12:56.866 7.573 - 7.627: 99.7153% ( 2) 00:12:56.866 7.680 - 7.733: 99.7285% ( 2) 00:12:56.866 7.733 - 7.787: 99.7484% ( 3) 00:12:56.866 7.787 - 7.840: 99.7550% ( 1) 00:12:56.866 7.947 - 8.000: 99.7616% ( 1) 00:12:56.866 8.000 - 8.053: 99.7682% ( 1) 00:12:56.866 8.053 - 8.107: 99.7748% ( 1) 00:12:56.866 8.107 - 8.160: 99.7815% ( 1) 00:12:56.866 8.160 - 8.213: 99.7947% ( 2) 00:12:56.866 8.213 - 8.267: 99.8080% ( 2) 00:12:56.866 8.267 - 8.320: 99.8146% ( 1) 00:12:56.866 8.480 - 8.533: 99.8212% ( 1) 00:12:56.866 8.853 - 8.907: 99.8278% ( 1) 00:12:56.866 8.960 - 9.013: 99.8344% ( 1) 00:12:56.866 9.120 - 9.173: 99.8477% ( 2) 00:12:56.866 9.173 - 9.227: 99.8543% ( 1) 00:12:56.866 9.333 - 9.387: 99.8609% ( 1) 00:12:56.866 9.760 - 9.813: 99.8676% ( 1) 00:12:56.866 10.827 - 10.880: 99.8742% ( 1) 00:12:56.866 15.893 - 16.000: 99.8808% ( 1) 00:12:56.866 3986.773 - 4014.080: 100.0000% ( 18) 00:12:56.866 00:12:56.866 Complete 
histogram 00:12:56.866 ================== 00:12:56.866 Range in us Cumulative Count 00:12:56.866 2.360 - 2.373: 0.0066% ( 1) 00:12:56.866 2.373 - 2.387: 0.0132% ( 1) 00:12:56.866 2.387 - 2.400: 1.4304% ( 214) 00:12:56.866 2.400 - 2.413: 3.0925% ( 251) 00:12:56.866 2.413 - 2.427: 3.4700% ( 57) 00:12:56.866 2.427 - 2.440: 3.8938% ( 64) 00:12:56.866 2.440 - 2.453: 53.0428% ( 7422) 00:12:56.866 2.453 - 2.467: 59.0756% ( 911) 00:12:56.866 2.467 - 2.480: 71.5780% ( 1888) 00:12:56.866 2.480 - 2.493: 77.0280% ( 823) 00:12:56.866 2.493 - 2.507: 81.2993% ( 645) 00:12:56.866 2.507 - 2.520: 83.5640% ( 342) 00:12:56.866 2.520 - 2.533: 88.7491% ( 783) 00:12:56.866 2.533 - 2.547: 94.2322% ( 828) 00:12:56.866 2.547 - 2.560: 96.4439% ( 334) 00:12:56.866 2.560 - 2.573: 97.7617% ( 199) 00:12:56.866 2.573 - 2.587: 98.6690% ( 137) 00:12:56.866 2.587 - 2.600: 99.1060% ( 66) 00:12:56.866 2.600 - 2.613: 99.1722% ( 10) 00:12:56.866 2.613 - 2.627: 99.1789% ( 1) 00:12:56.866 2.627 - 2.640: 99.1921% ( 2) 00:12:56.866 2.680 - [2024-06-11 09:27:28.627007] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:56.866 2.693: 99.1987% ( 1) 00:12:56.866 4.960 - 4.987: 99.2054% ( 1) 00:12:56.866 4.987 - 5.013: 99.2120% ( 1) 00:12:56.866 5.040 - 5.067: 99.2186% ( 1) 00:12:56.866 5.147 - 5.173: 99.2252% ( 1) 00:12:56.866 5.253 - 5.280: 99.2318% ( 1) 00:12:56.866 5.307 - 5.333: 99.2385% ( 1) 00:12:56.866 5.360 - 5.387: 99.2451% ( 1) 00:12:56.866 5.440 - 5.467: 99.2517% ( 1) 00:12:56.866 5.467 - 5.493: 99.2583% ( 1) 00:12:56.866 5.600 - 5.627: 99.2649% ( 1) 00:12:56.866 5.787 - 5.813: 99.2716% ( 1) 00:12:56.866 5.867 - 5.893: 99.2782% ( 1) 00:12:56.866 5.920 - 5.947: 99.2848% ( 1) 00:12:56.866 6.000 - 6.027: 99.2914% ( 1) 00:12:56.866 6.080 - 6.107: 99.2981% ( 1) 00:12:56.866 6.107 - 6.133: 99.3047% ( 1) 00:12:56.866 6.187 - 6.213: 99.3113% ( 1) 00:12:56.866 6.347 - 6.373: 99.3179% ( 1) 00:12:56.866 6.480 - 6.507: 99.3312% ( 2) 00:12:56.866 6.533 - 6.560: 99.3444% ( 2) 00:12:56.866 6.560 - 6.587: 99.3510% ( 1) 00:12:56.866 6.587 - 6.613: 99.3577% ( 1) 00:12:56.866 6.613 - 6.640: 99.3643% ( 1) 00:12:56.866 6.747 - 6.773: 99.3775% ( 2) 00:12:56.866 6.827 - 6.880: 99.3908% ( 2) 00:12:56.866 6.933 - 6.987: 99.3974% ( 1) 00:12:56.866 6.987 - 7.040: 99.4106% ( 2) 00:12:56.866 7.040 - 7.093: 99.4173% ( 1) 00:12:56.866 7.093 - 7.147: 99.4305% ( 2) 00:12:56.866 7.413 - 7.467: 99.4371% ( 1) 00:12:56.866 7.840 - 7.893: 99.4437% ( 1) 00:12:56.866 8.800 - 8.853: 99.4504% ( 1) 00:12:56.866 11.627 - 11.680: 99.4570% ( 1) 00:12:56.866 34.133 - 34.347: 99.4636% ( 1) 00:12:56.866 43.520 - 43.733: 99.4702% ( 1) 00:12:56.866 3986.773 - 4014.080: 99.9934% ( 79) 00:12:56.866 4997.120 - 5024.427: 100.0000% ( 1) 00:12:56.866 00:12:56.866 09:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:56.866 09:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:56.866 09:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:56.866 09:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:56.866 09:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:57.127 [ 00:12:57.127 { 00:12:57.127 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 
00:12:57.127 "subtype": "Discovery", 00:12:57.127 "listen_addresses": [], 00:12:57.127 "allow_any_host": true, 00:12:57.127 "hosts": [] 00:12:57.127 }, 00:12:57.127 { 00:12:57.127 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:57.127 "subtype": "NVMe", 00:12:57.127 "listen_addresses": [ 00:12:57.127 { 00:12:57.127 "trtype": "VFIOUSER", 00:12:57.127 "adrfam": "IPv4", 00:12:57.127 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:57.127 "trsvcid": "0" 00:12:57.127 } 00:12:57.127 ], 00:12:57.127 "allow_any_host": true, 00:12:57.127 "hosts": [], 00:12:57.127 "serial_number": "SPDK1", 00:12:57.127 "model_number": "SPDK bdev Controller", 00:12:57.127 "max_namespaces": 32, 00:12:57.127 "min_cntlid": 1, 00:12:57.127 "max_cntlid": 65519, 00:12:57.127 "namespaces": [ 00:12:57.127 { 00:12:57.127 "nsid": 1, 00:12:57.127 "bdev_name": "Malloc1", 00:12:57.127 "name": "Malloc1", 00:12:57.127 "nguid": "F81701457B174B608DDD9AFC2B35222C", 00:12:57.127 "uuid": "f8170145-7b17-4b60-8ddd-9afc2b35222c" 00:12:57.127 }, 00:12:57.127 { 00:12:57.127 "nsid": 2, 00:12:57.127 "bdev_name": "Malloc3", 00:12:57.127 "name": "Malloc3", 00:12:57.127 "nguid": "C9A9748ED7604F969F216407D37BC40E", 00:12:57.127 "uuid": "c9a9748e-d760-4f96-9f21-6407d37bc40e" 00:12:57.127 } 00:12:57.127 ] 00:12:57.127 }, 00:12:57.127 { 00:12:57.127 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:57.127 "subtype": "NVMe", 00:12:57.127 "listen_addresses": [ 00:12:57.127 { 00:12:57.127 "trtype": "VFIOUSER", 00:12:57.127 "adrfam": "IPv4", 00:12:57.127 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:57.127 "trsvcid": "0" 00:12:57.127 } 00:12:57.127 ], 00:12:57.127 "allow_any_host": true, 00:12:57.127 "hosts": [], 00:12:57.127 "serial_number": "SPDK2", 00:12:57.127 "model_number": "SPDK bdev Controller", 00:12:57.127 "max_namespaces": 32, 00:12:57.127 "min_cntlid": 1, 00:12:57.127 "max_cntlid": 65519, 00:12:57.127 "namespaces": [ 00:12:57.127 { 00:12:57.127 "nsid": 1, 00:12:57.127 "bdev_name": "Malloc2", 00:12:57.127 "name": "Malloc2", 00:12:57.127 "nguid": "80B56843EDB74BD4B6AFD0227202EDC3", 00:12:57.127 "uuid": "80b56843-edb7-4bd4-b6af-d0227202edc3" 00:12:57.127 } 00:12:57.127 ] 00:12:57.127 } 00:12:57.127 ] 00:12:57.127 09:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:57.127 09:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1023143 00:12:57.127 09:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:57.127 09:27:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:12:57.127 09:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:57.127 09:27:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:57.127 09:27:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:57.127 09:27:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:12:57.127 09:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:57.127 09:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:57.388 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.388 [2024-06-11 09:27:29.060835] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:57.388 Malloc4 00:12:57.388 09:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:57.648 [2024-06-11 09:27:29.327481] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:57.648 09:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:57.648 Asynchronous Event Request test 00:12:57.648 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:57.648 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:57.648 Registering asynchronous event callbacks... 00:12:57.648 Starting namespace attribute notice tests for all controllers... 00:12:57.648 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:57.648 aer_cb - Changed Namespace 00:12:57.648 Cleaning up... 00:12:57.910 [ 00:12:57.910 { 00:12:57.910 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:57.910 "subtype": "Discovery", 00:12:57.910 "listen_addresses": [], 00:12:57.910 "allow_any_host": true, 00:12:57.910 "hosts": [] 00:12:57.910 }, 00:12:57.910 { 00:12:57.910 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:57.910 "subtype": "NVMe", 00:12:57.910 "listen_addresses": [ 00:12:57.910 { 00:12:57.910 "trtype": "VFIOUSER", 00:12:57.910 "adrfam": "IPv4", 00:12:57.910 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:57.910 "trsvcid": "0" 00:12:57.910 } 00:12:57.910 ], 00:12:57.910 "allow_any_host": true, 00:12:57.910 "hosts": [], 00:12:57.910 "serial_number": "SPDK1", 00:12:57.910 "model_number": "SPDK bdev Controller", 00:12:57.910 "max_namespaces": 32, 00:12:57.910 "min_cntlid": 1, 00:12:57.910 "max_cntlid": 65519, 00:12:57.910 "namespaces": [ 00:12:57.910 { 00:12:57.910 "nsid": 1, 00:12:57.910 "bdev_name": "Malloc1", 00:12:57.910 "name": "Malloc1", 00:12:57.910 "nguid": "F81701457B174B608DDD9AFC2B35222C", 00:12:57.910 "uuid": "f8170145-7b17-4b60-8ddd-9afc2b35222c" 00:12:57.910 }, 00:12:57.910 { 00:12:57.910 "nsid": 2, 00:12:57.910 "bdev_name": "Malloc3", 00:12:57.910 "name": "Malloc3", 00:12:57.910 "nguid": "C9A9748ED7604F969F216407D37BC40E", 00:12:57.910 "uuid": "c9a9748e-d760-4f96-9f21-6407d37bc40e" 00:12:57.910 } 00:12:57.910 ] 00:12:57.910 }, 00:12:57.910 { 00:12:57.910 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:57.910 "subtype": "NVMe", 00:12:57.910 "listen_addresses": [ 00:12:57.910 { 00:12:57.910 "trtype": "VFIOUSER", 00:12:57.910 "adrfam": "IPv4", 00:12:57.910 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:57.910 "trsvcid": "0" 00:12:57.910 } 00:12:57.910 ], 00:12:57.910 "allow_any_host": true, 00:12:57.910 "hosts": [], 00:12:57.910 "serial_number": "SPDK2", 00:12:57.910 "model_number": "SPDK bdev Controller", 00:12:57.910 
"max_namespaces": 32, 00:12:57.910 "min_cntlid": 1, 00:12:57.910 "max_cntlid": 65519, 00:12:57.910 "namespaces": [ 00:12:57.910 { 00:12:57.910 "nsid": 1, 00:12:57.910 "bdev_name": "Malloc2", 00:12:57.910 "name": "Malloc2", 00:12:57.910 "nguid": "80B56843EDB74BD4B6AFD0227202EDC3", 00:12:57.910 "uuid": "80b56843-edb7-4bd4-b6af-d0227202edc3" 00:12:57.910 }, 00:12:57.910 { 00:12:57.910 "nsid": 2, 00:12:57.910 "bdev_name": "Malloc4", 00:12:57.910 "name": "Malloc4", 00:12:57.910 "nguid": "CDE74F06DC9343FAB2BA77C08923932B", 00:12:57.910 "uuid": "cde74f06-dc93-43fa-b2ba-77c08923932b" 00:12:57.910 } 00:12:57.910 ] 00:12:57.910 } 00:12:57.910 ] 00:12:57.910 09:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1023143 00:12:57.910 09:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:57.910 09:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1013397 00:12:57.910 09:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 1013397 ']' 00:12:57.910 09:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 1013397 00:12:57.910 09:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:12:57.910 09:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:57.910 09:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1013397 00:12:57.910 09:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:57.910 09:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:57.910 09:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1013397' 00:12:57.910 killing process with pid 1013397 00:12:57.910 09:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 1013397 00:12:57.910 09:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 1013397 00:12:58.170 09:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:58.170 09:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:58.170 09:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:58.170 09:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:58.171 09:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:58.171 09:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1023402 00:12:58.171 09:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1023402' 00:12:58.171 Process pid: 1023402 00:12:58.171 09:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:58.171 09:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:58.171 09:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1023402 00:12:58.171 09:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 1023402 ']' 00:12:58.171 09:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.171 09:27:29 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:58.171 09:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.171 09:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:58.171 09:27:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:58.171 [2024-06-11 09:27:29.864507] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:58.171 [2024-06-11 09:27:29.865435] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:12:58.171 [2024-06-11 09:27:29.865476] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.171 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.171 [2024-06-11 09:27:29.943617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.432 [2024-06-11 09:27:30.009282] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.432 [2024-06-11 09:27:30.009324] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.432 [2024-06-11 09:27:30.009332] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.432 [2024-06-11 09:27:30.009339] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.432 [2024-06-11 09:27:30.009345] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.432 [2024-06-11 09:27:30.009572] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.432 [2024-06-11 09:27:30.009709] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.432 [2024-06-11 09:27:30.009867] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.432 [2024-06-11 09:27:30.009868] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.432 [2024-06-11 09:27:30.080561] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:58.432 [2024-06-11 09:27:30.080664] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:58.432 [2024-06-11 09:27:30.081228] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:58.432 [2024-06-11 09:27:30.081613] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:58.432 [2024-06-11 09:27:30.081684] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
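For reference, the interrupt-mode target that the trace below configures boils down to the following sequence. This is a condensed sketch assembled from this run's own commands, not a separate script: $rpc is shorthand introduced here for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, binary paths are shortened to the spdk checkout root, and device 2 is set up the same way with cnode2/SPDK2/vfio-user2.

    # target launched on 4 cores with interrupt mode enabled (as above)
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &

    # VFIOUSER transport, also created in interrupt mode (-M -I)
    $rpc nvmf_create_transport -t VFIOUSER -M -I

    # per device: socket dir, backing malloc bdev, subsystem, namespace, listener
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0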
00:12:59.004 09:27:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:59.004 09:27:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:12:59.004 09:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:59.948 09:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:00.209 09:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:00.209 09:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:00.209 09:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:00.209 09:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:00.209 09:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:00.484 Malloc1 00:13:00.484 09:27:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:00.747 09:27:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:00.747 09:27:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:01.008 09:27:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:01.008 09:27:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:01.008 09:27:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:01.270 Malloc2 00:13:01.270 09:27:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:01.530 09:27:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:01.791 09:27:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:02.051 09:27:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:02.051 09:27:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1023402 00:13:02.051 09:27:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 1023402 ']' 00:13:02.051 09:27:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 1023402 00:13:02.051 09:27:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:13:02.051 09:27:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:02.051 09:27:33 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1023402 00:13:02.052 09:27:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:02.052 09:27:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:02.052 09:27:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1023402' 00:13:02.052 killing process with pid 1023402 00:13:02.052 09:27:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 1023402 00:13:02.052 09:27:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 1023402 00:13:02.052 09:27:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:02.052 09:27:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:02.052 00:13:02.052 real 0m52.070s 00:13:02.052 user 3m27.159s 00:13:02.052 sys 0m3.231s 00:13:02.052 09:27:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:02.052 09:27:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:02.052 ************************************ 00:13:02.052 END TEST nvmf_vfio_user 00:13:02.052 ************************************ 00:13:02.052 09:27:33 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:02.052 09:27:33 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:02.052 09:27:33 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:02.052 09:27:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:02.313 ************************************ 00:13:02.313 START TEST nvmf_vfio_user_nvme_compliance 00:13:02.313 ************************************ 00:13:02.313 09:27:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:02.313 * Looking for test storage... 
00:13:02.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:02.313 09:27:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.313 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=1024246 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1024246' 00:13:02.314 Process pid: 1024246 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1024246 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@830 -- # '[' -z 1024246 ']' 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:02.314 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:02.314 [2024-06-11 09:27:34.086783] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:13:02.314 [2024-06-11 09:27:34.086831] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.314 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.575 [2024-06-11 09:27:34.162505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:02.575 [2024-06-11 09:27:34.227280] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.575 [2024-06-11 09:27:34.227313] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.575 [2024-06-11 09:27:34.227327] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.575 [2024-06-11 09:27:34.227333] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.575 [2024-06-11 09:27:34.227339] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
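Before the CUnit suite starts, compliance.sh stands up a minimal single-controller vfio-user target. Reduced to its effective commands (a sketch of the rpc_cmd calls traced below, using this run's socket path and NQN; $rpc again abbreviates spdk/scripts/rpc.py and the compliance binary path is relative to the spdk checkout):

    mkdir -p /var/run/vfio-user
    $rpc nvmf_create_transport -t VFIOUSER
    $rpc bdev_malloc_create 64 512 -b malloc0
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0

    # the compliance binary then attaches over the same socket
    test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'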
00:13:02.575 [2024-06-11 09:27:34.227391] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.575 [2024-06-11 09:27:34.227509] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.575 [2024-06-11 09:27:34.227512] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.147 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:03.147 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@863 -- # return 0 00:13:03.147 09:27:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:04.565 09:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:04.565 09:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:04.565 09:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:04.565 09:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:04.565 09:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:04.565 09:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:04.565 09:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:04.565 09:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:04.565 09:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:04.565 09:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:04.565 malloc0 00:13:04.565 09:27:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:04.565 09:27:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:04.565 09:27:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:04.565 09:27:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:04.565 09:27:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:04.565 09:27:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:04.565 09:27:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:04.565 09:27:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:04.565 09:27:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:04.565 09:27:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:04.565 09:27:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:04.565 09:27:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:04.565 09:27:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:04.565 
09:27:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:04.565 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.565 00:13:04.565 00:13:04.565 CUnit - A unit testing framework for C - Version 2.1-3 00:13:04.565 http://cunit.sourceforge.net/ 00:13:04.565 00:13:04.565 00:13:04.565 Suite: nvme_compliance 00:13:04.565 Test: admin_identify_ctrlr_verify_dptr ...[2024-06-11 09:27:36.213898] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.565 [2024-06-11 09:27:36.215262] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:04.565 [2024-06-11 09:27:36.215277] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:04.565 [2024-06-11 09:27:36.215283] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:04.565 [2024-06-11 09:27:36.216925] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.565 passed 00:13:04.565 Test: admin_identify_ctrlr_verify_fused ...[2024-06-11 09:27:36.310520] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.565 [2024-06-11 09:27:36.313533] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.565 passed 00:13:04.826 Test: admin_identify_ns ...[2024-06-11 09:27:36.409572] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.826 [2024-06-11 09:27:36.473326] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:04.826 [2024-06-11 09:27:36.481328] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:04.826 [2024-06-11 09:27:36.502440] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.826 passed 00:13:04.826 Test: admin_get_features_mandatory_features ...[2024-06-11 09:27:36.594111] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.826 [2024-06-11 09:27:36.597134] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.826 passed 00:13:05.086 Test: admin_get_features_optional_features ...[2024-06-11 09:27:36.690653] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.086 [2024-06-11 09:27:36.693672] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.086 passed 00:13:05.086 Test: admin_set_features_number_of_queues ...[2024-06-11 09:27:36.787812] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.086 [2024-06-11 09:27:36.893432] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.346 passed 00:13:05.346 Test: admin_get_log_page_mandatory_logs ...[2024-06-11 09:27:36.984106] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.346 [2024-06-11 09:27:36.987121] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.346 passed 00:13:05.346 Test: admin_get_log_page_with_lpo ...[2024-06-11 09:27:37.080584] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.346 [2024-06-11 09:27:37.148324] 
ctrlr.c:2656:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:05.346 [2024-06-11 09:27:37.161390] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.606 passed 00:13:05.606 Test: fabric_property_get ...[2024-06-11 09:27:37.254468] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.606 [2024-06-11 09:27:37.255745] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:05.606 [2024-06-11 09:27:37.258497] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.606 passed 00:13:05.606 Test: admin_delete_io_sq_use_admin_qid ...[2024-06-11 09:27:37.352062] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.606 [2024-06-11 09:27:37.353284] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:05.606 [2024-06-11 09:27:37.355073] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.606 passed 00:13:05.867 Test: admin_delete_io_sq_delete_sq_twice ...[2024-06-11 09:27:37.446579] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.867 [2024-06-11 09:27:37.534323] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:05.867 [2024-06-11 09:27:37.550330] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:05.867 [2024-06-11 09:27:37.555413] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.867 passed 00:13:05.867 Test: admin_delete_io_cq_use_admin_qid ...[2024-06-11 09:27:37.646013] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.867 [2024-06-11 09:27:37.647229] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:05.867 [2024-06-11 09:27:37.649032] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.127 passed 00:13:06.127 Test: admin_delete_io_cq_delete_cq_first ...[2024-06-11 09:27:37.742163] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.127 [2024-06-11 09:27:37.817325] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:06.128 [2024-06-11 09:27:37.841321] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:06.128 [2024-06-11 09:27:37.846413] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.128 passed 00:13:06.128 Test: admin_create_io_cq_verify_iv_pc ...[2024-06-11 09:27:37.938430] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.128 [2024-06-11 09:27:37.939656] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:06.128 [2024-06-11 09:27:37.939677] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:06.128 [2024-06-11 09:27:37.941449] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.388 passed 00:13:06.388 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-06-11 09:27:38.036590] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.388 [2024-06-11 09:27:38.128320] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:06.388 [2024-06-11 09:27:38.136319] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:06.388 [2024-06-11 09:27:38.144319] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:06.388 [2024-06-11 09:27:38.152323] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:06.388 [2024-06-11 09:27:38.181402] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.648 passed 00:13:06.648 Test: admin_create_io_sq_verify_pc ...[2024-06-11 09:27:38.273008] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.648 [2024-06-11 09:27:38.289328] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:06.648 [2024-06-11 09:27:38.307169] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.648 passed 00:13:06.648 Test: admin_create_io_qp_max_qps ...[2024-06-11 09:27:38.400722] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:08.032 [2024-06-11 09:27:39.501326] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:08.292 [2024-06-11 09:27:39.876109] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:08.292 passed 00:13:08.292 Test: admin_create_io_sq_shared_cq ...[2024-06-11 09:27:39.968571] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:08.292 [2024-06-11 09:27:40.104324] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:08.553 [2024-06-11 09:27:40.148400] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:08.553 passed 00:13:08.553 00:13:08.553 Run Summary: Type Total Ran Passed Failed Inactive 00:13:08.553 suites 1 1 n/a 0 0 00:13:08.553 tests 18 18 18 0 0 00:13:08.553 asserts 360 360 360 0 n/a 00:13:08.553 00:13:08.553 Elapsed time = 1.648 seconds 00:13:08.553 09:27:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1024246 00:13:08.553 09:27:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@949 -- # '[' -z 1024246 ']' 00:13:08.553 09:27:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # kill -0 1024246 00:13:08.553 09:27:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # uname 00:13:08.553 09:27:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:08.553 09:27:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1024246 00:13:08.553 09:27:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:08.553 09:27:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:08.553 09:27:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1024246' 00:13:08.553 killing process with pid 1024246 00:13:08.553 09:27:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # kill 1024246 00:13:08.553 09:27:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # wait 1024246 00:13:08.814 09:27:40 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:08.814 00:13:08.814 real 0m6.494s 00:13:08.814 user 0m18.694s 00:13:08.814 sys 0m0.450s 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:08.814 ************************************ 00:13:08.814 END TEST nvmf_vfio_user_nvme_compliance 00:13:08.814 ************************************ 00:13:08.814 09:27:40 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:08.814 09:27:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:08.814 09:27:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:08.814 09:27:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:08.814 ************************************ 00:13:08.814 START TEST nvmf_vfio_user_fuzz 00:13:08.814 ************************************ 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:08.814 * Looking for test storage... 00:13:08.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.814 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1026179 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1026179' 00:13:08.815 Process pid: 1026179 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1026179 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@830 -- # '[' -z 1026179 ']' 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
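The fuzz target is wired up the same way as the compliance target just torn down (VFIOUSER transport, one malloc0 namespace under nqn.2021-09.io.spdk:cnode0, listener on /var/run/vfio-user); the fuzzing itself is done by nvme_fuzz against that socket. The invocation, copied from the trace below (path relative to the spdk checkout):

    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

Here -t 30 bounds the run to roughly 30 seconds, consistent with the trace timestamps jumping from 09:27:42 to 09:28:12 around the run, and -S 123456 fixes the seed, which is why per-queue random_seed values are reported alongside the command totals at the end.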
00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:08.815 09:27:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:09.758 09:27:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:09.758 09:27:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@863 -- # return 0 00:13:09.758 09:27:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:11.144 malloc0 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:11.144 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:11.145 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:11.145 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:11.145 09:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:43.255 Fuzzing completed. 
Shutting down the fuzz application 00:13:43.255 00:13:43.255 Dumping successful admin opcodes: 00:13:43.255 8, 9, 10, 24, 00:13:43.255 Dumping successful io opcodes: 00:13:43.255 0, 00:13:43.255 NS: 0x200003a1ef00 I/O qp, Total commands completed: 890300, total successful commands: 3466, random_seed: 2903587136 00:13:43.255 NS: 0x200003a1ef00 admin qp, Total commands completed: 172669, total successful commands: 1402, random_seed: 1181452672 00:13:43.255 09:28:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:43.255 09:28:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:43.255 09:28:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:43.255 09:28:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:43.255 09:28:12 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1026179 00:13:43.255 09:28:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@949 -- # '[' -z 1026179 ']' 00:13:43.255 09:28:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # kill -0 1026179 00:13:43.255 09:28:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # uname 00:13:43.255 09:28:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:43.255 09:28:12 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1026179 00:13:43.255 09:28:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:43.255 09:28:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:43.255 09:28:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1026179' 00:13:43.255 killing process with pid 1026179 00:13:43.255 09:28:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # kill 1026179 00:13:43.255 09:28:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # wait 1026179 00:13:43.255 09:28:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:43.255 09:28:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:43.255 00:13:43.255 real 0m32.789s 00:13:43.255 user 0m36.043s 00:13:43.255 sys 0m25.617s 00:13:43.255 09:28:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:43.255 09:28:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:43.255 ************************************ 00:13:43.255 END TEST nvmf_vfio_user_fuzz 00:13:43.255 ************************************ 00:13:43.255 09:28:13 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:43.255 09:28:13 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:43.255 09:28:13 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:43.255 09:28:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:43.255 ************************************ 00:13:43.255 START TEST nvmf_host_management 00:13:43.255 
************************************ 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:43.255 * Looking for test storage... 00:13:43.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:43.255 09:28:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:43.256 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:43.256 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.256 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:43.256 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:43.256 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
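One detail worth flagging from the common.sh sourcing above: the initiator identity is minted fresh each run with nvme gen-hostnqn, and NVME_HOSTID is the UUID tail of that NQN. A rough standalone equivalent (the suffix stripping is an assumption about how common.sh derives the ID; the result matches the values in the trace):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # keep only the UUID portion
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")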
00:13:43.256 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.256 09:28:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.256 09:28:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.256 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:43.256 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:43.256 09:28:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:43.256 09:28:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:49.846 09:28:20 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:49.846 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:49.846 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:49.846 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:49.846 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:49.846 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:49.847 09:28:20 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:49.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:49.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:13:49.847 00:13:49.847 --- 10.0.0.2 ping statistics --- 00:13:49.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.847 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:49.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:49.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:13:49.847 00:13:49.847 --- 10.0.0.1 ping statistics --- 00:13:49.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.847 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1040292 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1040292 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 1040292 ']' 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:49.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:49.847 09:28:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.847 [2024-06-11 09:28:20.789554] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:13:49.847 [2024-06-11 09:28:20.789610] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.847 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.847 [2024-06-11 09:28:20.852281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:49.847 [2024-06-11 09:28:20.922168] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.847 [2024-06-11 09:28:20.922201] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.847 [2024-06-11 09:28:20.922209] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.847 [2024-06-11 09:28:20.922216] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.847 [2024-06-11 09:28:20.922223] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.847 [2024-06-11 09:28:20.922340] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.847 [2024-06-11 09:28:20.922504] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.847 [2024-06-11 09:28:20.922610] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:13:49.847 [2024-06-11 09:28:20.922616] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.847 [2024-06-11 09:28:21.062174] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.847 Malloc0 00:13:49.847 [2024-06-11 09:28:21.121565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1040459 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1040459 /var/tmp/bdevperf.sock 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 1040459 ']' 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:49.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
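The ip/iptables trace a little further up is the whole physical-NIC test bed: the two e810 ports are split so the target owns cvl_0_0 (10.0.0.2) inside the cvl_0_0_ns_spdk namespace while the initiator keeps cvl_0_1 (10.0.0.1) in the root namespace, with TCP port 4420 opened for NVMe/TCP. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in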
00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:49.847 { 00:13:49.847 "params": { 00:13:49.847 "name": "Nvme$subsystem", 00:13:49.847 "trtype": "$TEST_TRANSPORT", 00:13:49.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:49.847 "adrfam": "ipv4", 00:13:49.847 "trsvcid": "$NVMF_PORT", 00:13:49.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:49.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:49.847 "hdgst": ${hdgst:-false}, 00:13:49.847 "ddgst": ${ddgst:-false} 00:13:49.847 }, 00:13:49.847 "method": "bdev_nvme_attach_controller" 00:13:49.847 } 00:13:49.847 EOF 00:13:49.847 )") 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:49.847 09:28:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:49.847 "params": { 00:13:49.847 "name": "Nvme0", 00:13:49.847 "trtype": "tcp", 00:13:49.847 "traddr": "10.0.0.2", 00:13:49.847 "adrfam": "ipv4", 00:13:49.847 "trsvcid": "4420", 00:13:49.847 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:49.847 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:49.847 "hdgst": false, 00:13:49.847 "ddgst": false 00:13:49.847 }, 00:13:49.847 "method": "bdev_nvme_attach_controller" 00:13:49.847 }' 00:13:49.847 [2024-06-11 09:28:21.220896] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:13:49.848 [2024-06-11 09:28:21.220942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040459 ] 00:13:49.848 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.848 [2024-06-11 09:28:21.296109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.848 [2024-06-11 09:28:21.360907] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.848 Running I/O for 10 seconds... 
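While the 10-second verify workload runs, the harness only declares it healthy once at least 100 reads have completed, polling over bdevperf's own RPC socket; the waitforio trace just below reduces to this one pipeline (scripts/rpc.py standing in for the rpc_cmd wrapper):

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops'      # >= 100 means I/O is flowing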
00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:50.421 09:28:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.421 [2024-06-11 09:28:22.181078] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f1180 is same with the state(5) to be set 00:13:50.421 [2024-06-11 09:28:22.181123] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f1180 is same with the state(5) to be set 00:13:50.421 [2024-06-11 09:28:22.181551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.181607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.181633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.181651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.181670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.181689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.181710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.181730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.181751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.181771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.181792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.181812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.181833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.181853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.181875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.181899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.181919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.181940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.181960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.181983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.181993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.182005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:13:50.421 [2024-06-11 09:28:22.182016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.421 [2024-06-11 09:28:22.182027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:13:50.422 [2024-06-11 09:28:22.182240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 
[2024-06-11 09:28:22.182464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 
09:28:22.182679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.422 [2024-06-11 09:28:22.182840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.422 [2024-06-11 09:28:22.182849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.423 [2024-06-11 09:28:22.182860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.423 [2024-06-11 09:28:22.182871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.423 [2024-06-11 09:28:22.182884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:50.423 [2024-06-11 
09:28:22.182894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:50.423 [2024-06-11 09:28:22.182907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:50.423 [2024-06-11 09:28:22.182916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:50.423 [2024-06-11 09:28:22.182928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:50.423 [2024-06-11 09:28:22.182937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:50.423 [2024-06-11 09:28:22.182947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:13:50.423 [2024-06-11 09:28:22.182957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:50.423 [2024-06-11 09:28:22.183011] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17d34b0 was disconnected and freed. reset controller.
00:13:50.423 [2024-06-11 09:28:22.184218] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:13:50.423 task offset: 111872 on job bdev=Nvme0n1 fails
00:13:50.423
00:13:50.423 Latency(us)
00:13:50.423 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:50.423 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:50.423 Job: Nvme0n1 ended in about 0.64 seconds with error
00:13:50.423 Verification LBA range: start 0x0 length 0x400
00:13:50.423 Nvme0n1 : 0.64 1309.23 81.83 100.71 0.00 44407.58 1897.81 40195.41
00:13:50.423 ===================================================================================================================
00:13:50.423 Total : 1309.23 81.83 100.71 0.00 44407.58 1897.81 40195.41
00:13:50.423 09:28:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:50.423 [2024-06-11 09:28:22.186231] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:50.423 [2024-06-11 09:28:22.186257] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139a510 (9): Bad file descriptor 00:13:50.423 09:28:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:50.423 09:28:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:50.423 09:28:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.423 09:28:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:50.423 09:28:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 [2024-06-11 09:28:22.207577] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
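The failure injected above is pure access control, not a transport fault: nvmf_subsystem_remove_host revokes host0, the target drops the queue pair, every in-flight command completes as ABORTED - SQ DELETION, and bdevperf's automatic controller reset only succeeds once nvmf_subsystem_add_host restores access. Stripped of the harness, the round-trip is:

    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # target disconnects the qpair; outstanding I/O aborts; the driver begins resetting
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # the next reset attempt reconnects and the controller comes back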
00:13:51.807 09:28:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1040459 00:13:51.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1040459) - No such process 00:13:51.807 09:28:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:51.807 09:28:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:51.807 09:28:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:51.807 09:28:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:51.807 09:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:51.807 09:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:51.807 09:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:51.807 09:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:51.807 { 00:13:51.807 "params": { 00:13:51.807 "name": "Nvme$subsystem", 00:13:51.807 "trtype": "$TEST_TRANSPORT", 00:13:51.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:51.807 "adrfam": "ipv4", 00:13:51.807 "trsvcid": "$NVMF_PORT", 00:13:51.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:51.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:51.807 "hdgst": ${hdgst:-false}, 00:13:51.807 "ddgst": ${ddgst:-false} 00:13:51.807 }, 00:13:51.807 "method": "bdev_nvme_attach_controller" 00:13:51.807 } 00:13:51.807 EOF 00:13:51.807 )") 00:13:51.807 09:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:51.807 09:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:51.807 09:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:51.807 09:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:51.807 "params": { 00:13:51.807 "name": "Nvme0", 00:13:51.807 "trtype": "tcp", 00:13:51.807 "traddr": "10.0.0.2", 00:13:51.807 "adrfam": "ipv4", 00:13:51.807 "trsvcid": "4420", 00:13:51.807 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:51.807 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:51.807 "hdgst": false, 00:13:51.807 "ddgst": false 00:13:51.807 }, 00:13:51.807 "method": "bdev_nvme_attach_controller" 00:13:51.807 }' 00:13:51.807 [2024-06-11 09:28:23.253383] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:13:51.807 [2024-06-11 09:28:23.253436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041056 ] 00:13:51.807 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.807 [2024-06-11 09:28:23.317910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.807 [2024-06-11 09:28:23.381555] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.807 Running I/O for 1 seconds... 
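The JSON fragment printed by gen_nvmf_target_json above is what this second bdevperf instance reads over /dev/fd/62. For reference, a sketch of an equivalent standalone invocation with the config written to a file; the outer "subsystems" wrapper is assumed from SPDK's usual JSON-config layout, since the trace only shows the inner bdev entry:

    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same workload flags as the run above: queue depth 64, 64 KiB verify I/O, 1 second.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1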
00:13:52.750 00:13:52.750 Latency(us) 00:13:52.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.750 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:52.750 Verification LBA range: start 0x0 length 0x400 00:13:52.750 Nvme0n1 : 1.02 1383.68 86.48 0.00 0.00 45509.35 11359.57 35389.44 00:13:52.750 =================================================================================================================== 00:13:52.750 Total : 1383.68 86.48 0.00 0.00 45509.35 11359.57 35389.44 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:53.014 rmmod nvme_tcp 00:13:53.014 rmmod nvme_fabrics 00:13:53.014 rmmod nvme_keyring 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1040292 ']' 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1040292 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 1040292 ']' 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 1040292 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # uname 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:53.014 09:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1040292 00:13:53.275 09:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:13:53.275 09:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:13:53.275 09:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1040292' 00:13:53.275 killing process with pid 1040292 00:13:53.275 09:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 1040292 00:13:53.275 09:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 1040292 00:13:53.275 [2024-06-11 09:28:24.955919] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:53.275 09:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:53.275 09:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:53.275 09:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:53.275 09:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:53.275 09:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:53.275 09:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.275 09:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.275 09:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.819 09:28:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:55.819 09:28:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:55.819 00:13:55.819 real 0m13.725s 00:13:55.819 user 0m20.454s 00:13:55.819 sys 0m6.402s 00:13:55.819 09:28:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:55.819 09:28:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:55.819 ************************************ 00:13:55.819 END TEST nvmf_host_management 00:13:55.819 ************************************ 00:13:55.819 09:28:27 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:55.819 09:28:27 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:55.819 09:28:27 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:55.819 09:28:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:55.819 ************************************ 00:13:55.819 START TEST nvmf_lvol 00:13:55.819 ************************************ 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:55.819 * Looking for test storage... 
00:13:55.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.819 09:28:27 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:55.819 09:28:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:03.964 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:03.964 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:03.964 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:03.964 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:03.964 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:03.964 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:03.964 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:03.965 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:03.965 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:03.965 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:03.965 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:03.965 
09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:03.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:03.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:14:03.965 00:14:03.965 --- 10.0.0.2 ping statistics --- 00:14:03.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.965 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:03.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:03.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:14:03.965 00:14:03.965 --- 10.0.0.1 ping statistics --- 00:14:03.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.965 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1047315 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1047315 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 1047315 ']' 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:03.965 09:28:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:03.965 [2024-06-11 09:28:34.656500] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:14:03.965 [2024-06-11 09:28:34.656565] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.965 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.965 [2024-06-11 09:28:34.745885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:03.965 [2024-06-11 09:28:34.847586] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.966 [2024-06-11 09:28:34.847643] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
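The two pings above confirm the point-to-point topology that nvmf_tcp_init assembled: one E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace for the target, while the peer port (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator. Condensed from the trace, the wiring is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Accept NVMe/TCP traffic to the default port on the initiator interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT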
00:14:03.966 [2024-06-11 09:28:34.847654] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.966 [2024-06-11 09:28:34.847663] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.966 [2024-06-11 09:28:34.847672] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.966 [2024-06-11 09:28:34.847812] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.966 [2024-06-11 09:28:34.847941] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.966 [2024-06-11 09:28:34.847943] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.966 09:28:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:03.966 09:28:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:14:03.966 09:28:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:03.966 09:28:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:03.966 09:28:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:03.966 09:28:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.966 09:28:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:04.227 [2024-06-11 09:28:35.783088] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.227 09:28:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:04.487 09:28:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:04.487 09:28:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:04.487 09:28:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:04.487 09:28:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:04.748 09:28:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:05.008 09:28:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=281dfe39-0d04-475d-b2c9-6f7a40b49f64 00:14:05.008 09:28:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 281dfe39-0d04-475d-b2c9-6f7a40b49f64 lvol 20 00:14:05.269 09:28:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3ffb9370-592d-4d9d-aef6-2bdcbf851638 00:14:05.269 09:28:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:05.529 09:28:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3ffb9370-592d-4d9d-aef6-2bdcbf851638 00:14:05.790 09:28:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:14:05.790 [2024-06-11 09:28:37.547331] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.790 09:28:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:06.051 09:28:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1048600 00:14:06.051 09:28:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:06.051 09:28:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:06.051 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.994 09:28:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3ffb9370-592d-4d9d-aef6-2bdcbf851638 MY_SNAPSHOT 00:14:07.255 09:28:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=17a8fa08-f1cf-412c-bad0-56ba9f097d7a 00:14:07.255 09:28:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3ffb9370-592d-4d9d-aef6-2bdcbf851638 30 00:14:07.516 09:28:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 17a8fa08-f1cf-412c-bad0-56ba9f097d7a MY_CLONE 00:14:07.776 09:28:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=71ba3f1f-486a-437a-8bc1-9130b07903e8 00:14:07.776 09:28:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 71ba3f1f-486a-437a-8bc1-9130b07903e8 00:14:08.348 09:28:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1048600 00:14:16.521 Initializing NVMe Controllers 00:14:16.521 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:16.521 Controller IO queue size 128, less than required. 00:14:16.521 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:16.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:16.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:16.521 Initialization complete. Launching workers. 
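Before the results below, the volume stack this test drives is easier to read in one place: two 64 MiB, 512-byte-block malloc bdevs striped into raid0, an lvstore on the stripe, a logical volume exported over NVMe/TCP, then a snapshot, a grow, a clone, and an inflate while spdk_nvme_perf is writing. Collected from the trace above (the UUIDs are the ones this run generated):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                 # run twice: Malloc0, Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    $rpc bdev_lvol_create_lvstore raid0 lvs        # -> 281dfe39-0d04-475d-b2c9-6f7a40b49f64
    $rpc bdev_lvol_create -u 281dfe39-0d04-475d-b2c9-6f7a40b49f64 lvol 20
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3ffb9370-592d-4d9d-aef6-2bdcbf851638
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # While the perf workload runs against the namespace:
    $rpc bdev_lvol_snapshot 3ffb9370-592d-4d9d-aef6-2bdcbf851638 MY_SNAPSHOT
    $rpc bdev_lvol_resize 3ffb9370-592d-4d9d-aef6-2bdcbf851638 30   # 20 -> 30, per the script's size variables
    $rpc bdev_lvol_clone 17a8fa08-f1cf-412c-bad0-56ba9f097d7a MY_CLONE
    $rpc bdev_lvol_inflate 71ba3f1f-486a-437a-8bc1-9130b07903e8     # decouple the clone from its snapshot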
00:14:16.521 ======================================================== 00:14:16.521 Latency(us) 00:14:16.521 Device Information : IOPS MiB/s Average min max 00:14:16.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12075.00 47.17 10609.04 1616.22 56315.44 00:14:16.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12398.80 48.43 10330.77 3957.46 51293.07 00:14:16.521 ======================================================== 00:14:16.521 Total : 24473.80 95.60 10468.06 1616.22 56315.44 00:14:16.521 00:14:16.521 09:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:16.521 09:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3ffb9370-592d-4d9d-aef6-2bdcbf851638 00:14:16.782 09:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 281dfe39-0d04-475d-b2c9-6f7a40b49f64 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:17.042 rmmod nvme_tcp 00:14:17.042 rmmod nvme_fabrics 00:14:17.042 rmmod nvme_keyring 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1047315 ']' 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1047315 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 1047315 ']' 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 1047315 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1047315 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1047315' 00:14:17.042 killing process with pid 1047315 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 1047315 00:14:17.042 09:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 1047315 00:14:17.301 09:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:17.301 
09:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:17.301 09:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:17.301 09:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:17.301 09:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:17.301 09:28:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.301 09:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.301 09:28:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.215 09:28:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:19.215 00:14:19.215 real 0m23.884s 00:14:19.215 user 1m5.868s 00:14:19.215 sys 0m7.937s 00:14:19.215 09:28:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:19.215 09:28:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:19.215 ************************************ 00:14:19.215 END TEST nvmf_lvol 00:14:19.215 ************************************ 00:14:19.475 09:28:51 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:19.475 09:28:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:19.475 09:28:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:19.475 09:28:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:19.475 ************************************ 00:14:19.475 START TEST nvmf_lvs_grow 00:14:19.475 ************************************ 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:19.475 * Looking for test storage... 
00:14:19.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.475 09:28:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:19.476 09:28:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:27.698 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.698 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:27.699 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:27.699 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:27.699 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:27.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:27.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.767 ms 00:14:27.699 00:14:27.699 --- 10.0.0.2 ping statistics --- 00:14:27.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.699 rtt min/avg/max/mdev = 0.767/0.767/0.767/0.000 ms 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:27.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:27.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:14:27.699 00:14:27.699 --- 10.0.0.1 ping statistics --- 00:14:27.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.699 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1058472 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1058472 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 1058472 ']' 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:27.699 09:28:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:27.699 [2024-06-11 09:28:58.541159] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:14:27.699 [2024-06-11 09:28:58.541220] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.699 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.699 [2024-06-11 09:28:58.629252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.699 [2024-06-11 09:28:58.728946] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.699 [2024-06-11 09:28:58.729002] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
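
The nvmf_tcp_init sequence traced above is how the phy test turns the E810 pair into a two-port loopback: one cvl port is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator, and the two pings prove reachability in both directions before any NVMe/TCP traffic flows. Condensed to the bare commands (interface names and addresses are the ones this run picked):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port, isolated
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns

nvmf_tgt itself is then launched under the same "ip netns exec cvl_0_0_ns_spdk" prefix (the nvmf/common.sh@480 line above), so the target listens on 10.0.0.2 inside the namespace while the initiator connects from the root namespace.
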
00:14:27.699 [2024-06-11 09:28:58.729017] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.699 [2024-06-11 09:28:58.729024] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.699 [2024-06-11 09:28:58.729031] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.699 [2024-06-11 09:28:58.729059] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.699 09:28:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:27.699 09:28:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:14:27.699 09:28:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:27.699 09:28:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:27.699 09:28:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:27.699 09:28:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.699 09:28:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:27.960 [2024-06-11 09:28:59.675494] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.960 09:28:59 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:27.960 09:28:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:27.960 09:28:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:27.960 09:28:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:27.960 ************************************ 00:14:27.960 START TEST lvs_grow_clean 00:14:27.960 ************************************ 00:14:27.960 09:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:14:27.960 09:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:27.960 09:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:27.960 09:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:27.960 09:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:27.960 09:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:27.960 09:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:27.960 09:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:27.960 09:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:27.960 09:28:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:28.221 09:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:28.221 09:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:28.481 09:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c43b4290-8fb5-49cb-8f2f-737fd551163f 00:14:28.481 09:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43b4290-8fb5-49cb-8f2f-737fd551163f 00:14:28.481 09:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:28.743 09:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:28.743 09:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:28.743 09:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c43b4290-8fb5-49cb-8f2f-737fd551163f lvol 150 00:14:29.003 09:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6073925b-7883-418b-8c92-dd9035607548 00:14:29.003 09:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:29.003 09:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:29.263 [2024-06-11 09:29:00.895242] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:29.263 [2024-06-11 09:29:00.895303] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:29.263 true 00:14:29.263 09:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43b4290-8fb5-49cb-8f2f-737fd551163f 00:14:29.263 09:29:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:29.524 09:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:29.524 09:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:29.524 09:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6073925b-7883-418b-8c92-dd9035607548 00:14:29.785 09:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:30.046 [2024-06-11 09:29:01.721823] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.046 09:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:30.306 09:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1059680 00:14:30.306 09:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:30.306 09:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:30.306 09:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1059680 /var/tmp/bdevperf.sock 00:14:30.306 09:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 1059680 ']' 00:14:30.306 09:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:30.306 09:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:30.306 09:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:30.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:30.306 09:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:30.306 09:29:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:30.306 [2024-06-11 09:29:02.004894] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
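
Stripped of the xtrace noise, what has happened so far in the clean variant is the stock export-and-probe sequence: the lvol is exposed as an NVMe/TCP namespace, and bdevperf is started in -z mode, where it idles on its own RPC socket until told what to attach and when to run. Every command below appears verbatim in the trace (rpc.py and bdevperf paths shortened to the SPDK tree); the bdev_nvme_attach_controller call is the one issued just below once /var/tmp/bdevperf.sock is up:

  # target side: export the lvol over NVMe/TCP
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6073925b-7883-418b-8c92-dd9035607548
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: bdevperf waits for RPC, then gets the target attached
  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

The Nvme0n1 bdev in the JSON dump below is that attached namespace; bdevperf.py perform_tests then drives the 10-second randwrite run whose per-second table follows.
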
00:14:30.306 [2024-06-11 09:29:02.004971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1059680 ] 00:14:30.306 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.306 [2024-06-11 09:29:02.069581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.567 [2024-06-11 09:29:02.143755] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.137 09:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:31.137 09:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:14:31.137 09:29:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:31.708 Nvme0n1 00:14:31.708 09:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:31.708 [ 00:14:31.708 { 00:14:31.708 "name": "Nvme0n1", 00:14:31.708 "aliases": [ 00:14:31.708 "6073925b-7883-418b-8c92-dd9035607548" 00:14:31.708 ], 00:14:31.708 "product_name": "NVMe disk", 00:14:31.708 "block_size": 4096, 00:14:31.708 "num_blocks": 38912, 00:14:31.708 "uuid": "6073925b-7883-418b-8c92-dd9035607548", 00:14:31.708 "assigned_rate_limits": { 00:14:31.708 "rw_ios_per_sec": 0, 00:14:31.708 "rw_mbytes_per_sec": 0, 00:14:31.708 "r_mbytes_per_sec": 0, 00:14:31.708 "w_mbytes_per_sec": 0 00:14:31.708 }, 00:14:31.708 "claimed": false, 00:14:31.708 "zoned": false, 00:14:31.708 "supported_io_types": { 00:14:31.708 "read": true, 00:14:31.708 "write": true, 00:14:31.708 "unmap": true, 00:14:31.708 "write_zeroes": true, 00:14:31.708 "flush": true, 00:14:31.708 "reset": true, 00:14:31.708 "compare": true, 00:14:31.708 "compare_and_write": true, 00:14:31.708 "abort": true, 00:14:31.708 "nvme_admin": true, 00:14:31.708 "nvme_io": true 00:14:31.708 }, 00:14:31.708 "memory_domains": [ 00:14:31.708 { 00:14:31.708 "dma_device_id": "system", 00:14:31.708 "dma_device_type": 1 00:14:31.708 } 00:14:31.708 ], 00:14:31.708 "driver_specific": { 00:14:31.708 "nvme": [ 00:14:31.708 { 00:14:31.708 "trid": { 00:14:31.708 "trtype": "TCP", 00:14:31.708 "adrfam": "IPv4", 00:14:31.708 "traddr": "10.0.0.2", 00:14:31.708 "trsvcid": "4420", 00:14:31.708 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:31.708 }, 00:14:31.708 "ctrlr_data": { 00:14:31.708 "cntlid": 1, 00:14:31.708 "vendor_id": "0x8086", 00:14:31.708 "model_number": "SPDK bdev Controller", 00:14:31.708 "serial_number": "SPDK0", 00:14:31.708 "firmware_revision": "24.09", 00:14:31.708 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:31.708 "oacs": { 00:14:31.708 "security": 0, 00:14:31.708 "format": 0, 00:14:31.708 "firmware": 0, 00:14:31.708 "ns_manage": 0 00:14:31.708 }, 00:14:31.708 "multi_ctrlr": true, 00:14:31.708 "ana_reporting": false 00:14:31.708 }, 00:14:31.708 "vs": { 00:14:31.708 "nvme_version": "1.3" 00:14:31.708 }, 00:14:31.708 "ns_data": { 00:14:31.708 "id": 1, 00:14:31.708 "can_share": true 00:14:31.708 } 00:14:31.708 } 00:14:31.708 ], 00:14:31.708 "mp_policy": "active_passive" 00:14:31.708 } 00:14:31.708 } 00:14:31.708 ] 00:14:31.708 09:29:03 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1060069 00:14:31.708 09:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:31.708 09:29:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:31.968 Running I/O for 10 seconds... 00:14:32.931 Latency(us) 00:14:32.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.931 Nvme0n1 : 1.00 18058.00 70.54 0.00 0.00 0.00 0.00 0.00 00:14:32.931 =================================================================================================================== 00:14:32.931 Total : 18058.00 70.54 0.00 0.00 0.00 0.00 0.00 00:14:32.931 00:14:33.873 09:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c43b4290-8fb5-49cb-8f2f-737fd551163f 00:14:33.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.873 Nvme0n1 : 2.00 18149.00 70.89 0.00 0.00 0.00 0.00 0.00 00:14:33.873 =================================================================================================================== 00:14:33.873 Total : 18149.00 70.89 0.00 0.00 0.00 0.00 0.00 00:14:33.873 00:14:33.873 true 00:14:33.873 09:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43b4290-8fb5-49cb-8f2f-737fd551163f 00:14:33.873 09:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:34.134 09:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:34.134 09:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:34.134 09:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1060069 00:14:35.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.075 Nvme0n1 : 3.00 18157.33 70.93 0.00 0.00 0.00 0.00 0.00 00:14:35.075 =================================================================================================================== 00:14:35.075 Total : 18157.33 70.93 0.00 0.00 0.00 0.00 0.00 00:14:35.075 00:14:36.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.017 Nvme0n1 : 4.00 18192.00 71.06 0.00 0.00 0.00 0.00 0.00 00:14:36.018 =================================================================================================================== 00:14:36.018 Total : 18192.00 71.06 0.00 0.00 0.00 0.00 0.00 00:14:36.018 00:14:36.958 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.958 Nvme0n1 : 5.00 18213.40 71.15 0.00 0.00 0.00 0.00 0.00 00:14:36.958 =================================================================================================================== 00:14:36.958 Total : 18213.40 71.15 0.00 0.00 0.00 0.00 0.00 00:14:36.958 00:14:37.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.900 Nvme0n1 : 6.00 18227.33 71.20 0.00 0.00 0.00 0.00 0.00 00:14:37.900 
=================================================================================================================== 00:14:37.900 Total : 18227.33 71.20 0.00 0.00 0.00 0.00 0.00 00:14:37.900 00:14:38.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.842 Nvme0n1 : 7.00 18219.71 71.17 0.00 0.00 0.00 0.00 0.00 00:14:38.842 =================================================================================================================== 00:14:38.842 Total : 18219.71 71.17 0.00 0.00 0.00 0.00 0.00 00:14:38.842 00:14:39.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.784 Nvme0n1 : 8.00 18235.62 71.23 0.00 0.00 0.00 0.00 0.00 00:14:39.784 =================================================================================================================== 00:14:39.784 Total : 18235.62 71.23 0.00 0.00 0.00 0.00 0.00 00:14:39.784 00:14:41.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.169 Nvme0n1 : 9.00 18243.22 71.26 0.00 0.00 0.00 0.00 0.00 00:14:41.169 =================================================================================================================== 00:14:41.169 Total : 18243.22 71.26 0.00 0.00 0.00 0.00 0.00 00:14:41.169 00:14:42.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.112 Nvme0n1 : 10.00 18255.50 71.31 0.00 0.00 0.00 0.00 0.00 00:14:42.112 =================================================================================================================== 00:14:42.112 Total : 18255.50 71.31 0.00 0.00 0.00 0.00 0.00 00:14:42.112 00:14:42.112 00:14:42.112 Latency(us) 00:14:42.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.112 Nvme0n1 : 10.01 18256.63 71.31 0.00 0.00 7008.44 4287.15 14199.47 00:14:42.112 =================================================================================================================== 00:14:42.112 Total : 18256.63 71.31 0.00 0.00 7008.44 4287.15 14199.47 00:14:42.112 0 00:14:42.112 09:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1059680 00:14:42.112 09:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 1059680 ']' 00:14:42.112 09:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 1059680 00:14:42.112 09:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:14:42.112 09:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:42.112 09:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1059680 00:14:42.112 09:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:42.112 09:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:42.112 09:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1059680' 00:14:42.112 killing process with pid 1059680 00:14:42.112 09:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 1059680 00:14:42.112 Received shutdown signal, test time was about 10.000000 seconds 00:14:42.112 00:14:42.112 Latency(us) 00:14:42.112 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:14:42.112 =================================================================================================================== 00:14:42.112 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:42.112 09:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 1059680 00:14:42.112 09:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:42.374 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:42.636 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43b4290-8fb5-49cb-8f2f-737fd551163f 00:14:42.636 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:42.896 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:42.896 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:42.896 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:42.896 [2024-06-11 09:29:14.657860] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:42.896 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43b4290-8fb5-49cb-8f2f-737fd551163f 00:14:42.896 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:14:42.896 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43b4290-8fb5-49cb-8f2f-737fd551163f 00:14:42.896 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.896 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:42.896 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.896 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:42.896 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.896 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:42.896 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.896 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:42.896 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43b4290-8fb5-49cb-8f2f-737fd551163f 00:14:43.158 request: 00:14:43.158 { 00:14:43.158 "uuid": "c43b4290-8fb5-49cb-8f2f-737fd551163f", 00:14:43.158 "method": "bdev_lvol_get_lvstores", 00:14:43.158 "req_id": 1 00:14:43.158 } 00:14:43.158 Got JSON-RPC error response 00:14:43.158 response: 00:14:43.158 { 00:14:43.158 "code": -19, 00:14:43.158 "message": "No such device" 00:14:43.158 } 00:14:43.158 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:14:43.158 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:43.158 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:43.158 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:43.158 09:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:43.420 aio_bdev 00:14:43.420 09:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6073925b-7883-418b-8c92-dd9035607548 00:14:43.420 09:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=6073925b-7883-418b-8c92-dd9035607548 00:14:43.420 09:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:43.420 09:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:14:43.420 09:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:43.420 09:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:43.420 09:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:43.680 09:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6073925b-7883-418b-8c92-dd9035607548 -t 2000 00:14:43.680 [ 00:14:43.680 { 00:14:43.680 "name": "6073925b-7883-418b-8c92-dd9035607548", 00:14:43.680 "aliases": [ 00:14:43.680 "lvs/lvol" 00:14:43.680 ], 00:14:43.680 "product_name": "Logical Volume", 00:14:43.680 "block_size": 4096, 00:14:43.680 "num_blocks": 38912, 00:14:43.680 "uuid": "6073925b-7883-418b-8c92-dd9035607548", 00:14:43.680 "assigned_rate_limits": { 00:14:43.680 "rw_ios_per_sec": 0, 00:14:43.680 "rw_mbytes_per_sec": 0, 00:14:43.680 "r_mbytes_per_sec": 0, 00:14:43.680 "w_mbytes_per_sec": 0 00:14:43.680 }, 00:14:43.680 "claimed": false, 00:14:43.680 "zoned": false, 00:14:43.680 "supported_io_types": { 00:14:43.680 "read": true, 00:14:43.680 "write": true, 00:14:43.680 "unmap": true, 00:14:43.680 "write_zeroes": true, 00:14:43.680 "flush": false, 00:14:43.680 "reset": true, 00:14:43.680 "compare": false, 00:14:43.680 "compare_and_write": false, 00:14:43.680 "abort": false, 00:14:43.680 "nvme_admin": false, 00:14:43.680 "nvme_io": false 00:14:43.680 }, 00:14:43.680 "driver_specific": { 00:14:43.680 "lvol": { 00:14:43.680 "lvol_store_uuid": "c43b4290-8fb5-49cb-8f2f-737fd551163f", 00:14:43.680 "base_bdev": "aio_bdev", 
00:14:43.680 "thin_provision": false, 00:14:43.680 "num_allocated_clusters": 38, 00:14:43.680 "snapshot": false, 00:14:43.680 "clone": false, 00:14:43.680 "esnap_clone": false 00:14:43.680 } 00:14:43.680 } 00:14:43.680 } 00:14:43.680 ] 00:14:43.941 09:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:14:43.941 09:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43b4290-8fb5-49cb-8f2f-737fd551163f 00:14:43.941 09:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:43.941 09:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:43.941 09:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43b4290-8fb5-49cb-8f2f-737fd551163f 00:14:43.941 09:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:44.202 09:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:44.202 09:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6073925b-7883-418b-8c92-dd9035607548 00:14:44.463 09:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c43b4290-8fb5-49cb-8f2f-737fd551163f 00:14:44.724 09:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:44.985 09:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:44.985 00:14:44.985 real 0m16.857s 00:14:44.985 user 0m16.644s 00:14:44.985 sys 0m1.450s 00:14:44.985 09:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:44.985 09:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:44.985 ************************************ 00:14:44.985 END TEST lvs_grow_clean 00:14:44.985 ************************************ 00:14:44.985 09:29:16 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:44.985 09:29:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:44.985 09:29:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:44.985 09:29:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:44.985 ************************************ 00:14:44.985 START TEST lvs_grow_dirty 00:14:44.985 ************************************ 00:14:44.985 09:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:14:44.985 09:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:44.985 09:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:44.985 09:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:14:44.985 09:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:44.985 09:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:44.985 09:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:44.985 09:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:44.985 09:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:44.985 09:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:45.276 09:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:45.276 09:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:45.536 09:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=85ba4870-c522-4415-a43b-406f6df66550 00:14:45.536 09:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba4870-c522-4415-a43b-406f6df66550 00:14:45.536 09:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:45.537 09:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:45.537 09:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:45.537 09:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 85ba4870-c522-4415-a43b-406f6df66550 lvol 150 00:14:45.797 09:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c720fdde-2d8e-4420-ab86-f2036de56459 00:14:45.797 09:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:45.797 09:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:46.058 [2024-06-11 09:29:17.711922] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:46.058 [2024-06-11 09:29:17.711974] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:46.058 true 00:14:46.058 09:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba4870-c522-4415-a43b-406f6df66550 00:14:46.058 09:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:14:46.319 09:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:46.319 09:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:46.579 09:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c720fdde-2d8e-4420-ab86-f2036de56459 00:14:46.579 09:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:46.839 [2024-06-11 09:29:18.542378] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.839 09:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:47.099 09:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1065540 00:14:47.099 09:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:47.099 09:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:47.099 09:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1065540 /var/tmp/bdevperf.sock 00:14:47.099 09:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 1065540 ']' 00:14:47.099 09:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:47.099 09:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:47.100 09:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:47.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:47.100 09:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:47.100 09:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:47.100 [2024-06-11 09:29:18.809943] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
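
The dirty variant repeats the same setup with a fresh lvstore (85ba4870-c522-4415-a43b-406f6df66550) and lvol (c720fdde-2d8e-4420-ab86-f2036de56459), but moves the step under test into the I/O window: the backing file was already truncated from 200M to 400M and rescanned above, and bdev_lvol_grow_lvstore is issued while bdevperf's randwrite run is in flight (visible at the 2-second mark below). The grow itself, condensed from the traced commands (paths shortened to the SPDK tree):

  truncate -s 400M test/nvmf/target/aio_bdev            # backing file: 200M -> 400M
  scripts/rpc.py bdev_aio_rescan aio_bdev               # block count 51200 -> 102400
  scripts/rpc.py bdev_lvol_grow_lvstore -u 85ba4870-c522-4415-a43b-406f6df66550
  scripts/rpc.py bdev_lvol_get_lvstores -u 85ba4870-c522-4415-a43b-406f6df66550 \
      | jq -r '.[0].total_data_clusters'                # 49 -> 99 (4 MiB clusters)

The reason for growing under load comes after the run: the target is killed with SIGKILL while the grown lvstore is still dirty, so the follow-up restart has to recover it.
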
00:14:47.100 [2024-06-11 09:29:18.809993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1065540 ] 00:14:47.100 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.100 [2024-06-11 09:29:18.867727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.361 [2024-06-11 09:29:18.932980] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.361 09:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:47.361 09:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:14:47.361 09:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:47.621 Nvme0n1 00:14:47.621 09:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:47.883 [ 00:14:47.883 { 00:14:47.883 "name": "Nvme0n1", 00:14:47.883 "aliases": [ 00:14:47.883 "c720fdde-2d8e-4420-ab86-f2036de56459" 00:14:47.883 ], 00:14:47.883 "product_name": "NVMe disk", 00:14:47.883 "block_size": 4096, 00:14:47.883 "num_blocks": 38912, 00:14:47.883 "uuid": "c720fdde-2d8e-4420-ab86-f2036de56459", 00:14:47.883 "assigned_rate_limits": { 00:14:47.883 "rw_ios_per_sec": 0, 00:14:47.883 "rw_mbytes_per_sec": 0, 00:14:47.883 "r_mbytes_per_sec": 0, 00:14:47.883 "w_mbytes_per_sec": 0 00:14:47.883 }, 00:14:47.883 "claimed": false, 00:14:47.883 "zoned": false, 00:14:47.883 "supported_io_types": { 00:14:47.883 "read": true, 00:14:47.883 "write": true, 00:14:47.883 "unmap": true, 00:14:47.883 "write_zeroes": true, 00:14:47.883 "flush": true, 00:14:47.883 "reset": true, 00:14:47.883 "compare": true, 00:14:47.883 "compare_and_write": true, 00:14:47.883 "abort": true, 00:14:47.883 "nvme_admin": true, 00:14:47.883 "nvme_io": true 00:14:47.883 }, 00:14:47.883 "memory_domains": [ 00:14:47.883 { 00:14:47.883 "dma_device_id": "system", 00:14:47.883 "dma_device_type": 1 00:14:47.883 } 00:14:47.883 ], 00:14:47.883 "driver_specific": { 00:14:47.883 "nvme": [ 00:14:47.883 { 00:14:47.883 "trid": { 00:14:47.883 "trtype": "TCP", 00:14:47.883 "adrfam": "IPv4", 00:14:47.883 "traddr": "10.0.0.2", 00:14:47.883 "trsvcid": "4420", 00:14:47.883 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:47.883 }, 00:14:47.883 "ctrlr_data": { 00:14:47.883 "cntlid": 1, 00:14:47.883 "vendor_id": "0x8086", 00:14:47.883 "model_number": "SPDK bdev Controller", 00:14:47.883 "serial_number": "SPDK0", 00:14:47.883 "firmware_revision": "24.09", 00:14:47.883 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:47.883 "oacs": { 00:14:47.883 "security": 0, 00:14:47.883 "format": 0, 00:14:47.883 "firmware": 0, 00:14:47.883 "ns_manage": 0 00:14:47.883 }, 00:14:47.883 "multi_ctrlr": true, 00:14:47.883 "ana_reporting": false 00:14:47.883 }, 00:14:47.883 "vs": { 00:14:47.883 "nvme_version": "1.3" 00:14:47.883 }, 00:14:47.883 "ns_data": { 00:14:47.883 "id": 1, 00:14:47.883 "can_share": true 00:14:47.883 } 00:14:47.883 } 00:14:47.883 ], 00:14:47.883 "mp_policy": "active_passive" 00:14:47.883 } 00:14:47.883 } 00:14:47.883 ] 00:14:47.883 09:29:19 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1065792 00:14:47.883 09:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:47.883 09:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:47.883 Running I/O for 10 seconds... 00:14:48.828 Latency(us) 00:14:48.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.828 Nvme0n1 : 1.00 17864.00 69.78 0.00 0.00 0.00 0.00 0.00 00:14:48.828 =================================================================================================================== 00:14:48.828 Total : 17864.00 69.78 0.00 0.00 0.00 0.00 0.00 00:14:48.828 00:14:49.770 09:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 85ba4870-c522-4415-a43b-406f6df66550 00:14:50.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.030 Nvme0n1 : 2.00 18048.50 70.50 0.00 0.00 0.00 0.00 0.00 00:14:50.030 =================================================================================================================== 00:14:50.030 Total : 18048.50 70.50 0.00 0.00 0.00 0.00 0.00 00:14:50.030 00:14:50.030 true 00:14:50.030 09:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba4870-c522-4415-a43b-406f6df66550 00:14:50.030 09:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:50.290 09:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:50.290 09:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:50.290 09:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1065792 00:14:50.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.861 Nvme0n1 : 3.00 18112.00 70.75 0.00 0.00 0.00 0.00 0.00 00:14:50.861 =================================================================================================================== 00:14:50.861 Total : 18112.00 70.75 0.00 0.00 0.00 0.00 0.00 00:14:50.861 00:14:52.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.247 Nvme0n1 : 4.00 18148.00 70.89 0.00 0.00 0.00 0.00 0.00 00:14:52.247 =================================================================================================================== 00:14:52.247 Total : 18148.00 70.89 0.00 0.00 0.00 0.00 0.00 00:14:52.247 00:14:53.188 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.188 Nvme0n1 : 5.00 18179.20 71.01 0.00 0.00 0.00 0.00 0.00 00:14:53.188 =================================================================================================================== 00:14:53.188 Total : 18179.20 71.01 0.00 0.00 0.00 0.00 0.00 00:14:53.188 00:14:54.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.130 Nvme0n1 : 6.00 18207.17 71.12 0.00 0.00 0.00 0.00 0.00 00:14:54.130 
=================================================================================================================== 00:14:54.130 Total : 18207.17 71.12 0.00 0.00 0.00 0.00 0.00 00:14:54.130 00:14:55.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.073 Nvme0n1 : 7.00 18221.00 71.18 0.00 0.00 0.00 0.00 0.00 00:14:55.073 =================================================================================================================== 00:14:55.073 Total : 18221.00 71.18 0.00 0.00 0.00 0.00 0.00 00:14:55.073 00:14:56.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.015 Nvme0n1 : 8.00 18231.25 71.22 0.00 0.00 0.00 0.00 0.00 00:14:56.015 =================================================================================================================== 00:14:56.015 Total : 18231.25 71.22 0.00 0.00 0.00 0.00 0.00 00:14:56.015 00:14:56.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.957 Nvme0n1 : 9.00 18247.67 71.28 0.00 0.00 0.00 0.00 0.00 00:14:56.957 =================================================================================================================== 00:14:56.957 Total : 18247.67 71.28 0.00 0.00 0.00 0.00 0.00 00:14:56.957 00:14:57.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.899 Nvme0n1 : 10.00 18264.40 71.35 0.00 0.00 0.00 0.00 0.00 00:14:57.900 =================================================================================================================== 00:14:57.900 Total : 18264.40 71.35 0.00 0.00 0.00 0.00 0.00 00:14:57.900 00:14:57.900 00:14:57.900 Latency(us) 00:14:57.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.900 Nvme0n1 : 10.01 18263.70 71.34 0.00 0.00 7004.87 4259.84 12997.97 00:14:57.900 =================================================================================================================== 00:14:57.900 Total : 18263.70 71.34 0.00 0.00 7004.87 4259.84 12997.97 00:14:57.900 0 00:14:57.900 09:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1065540 00:14:57.900 09:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 1065540 ']' 00:14:57.900 09:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 1065540 00:14:57.900 09:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:14:57.900 09:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:57.900 09:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1065540 00:14:58.160 09:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:58.160 09:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:58.160 09:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1065540' 00:14:58.160 killing process with pid 1065540 00:14:58.160 09:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 1065540 00:14:58.160 Received shutdown signal, test time was about 10.000000 seconds 00:14:58.160 00:14:58.160 Latency(us) 00:14:58.160 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:14:58.160 =================================================================================================================== 00:14:58.160 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:58.160 09:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 1065540 00:14:58.160 09:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:58.421 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:58.683 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba4870-c522-4415-a43b-406f6df66550 00:14:58.683 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:58.944 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:58.944 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:58.944 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1058472 00:14:58.944 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1058472 00:14:58.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1058472 Killed "${NVMF_APP[@]}" "$@" 00:14:58.944 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:58.944 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:58.944 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:58.944 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:58.944 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:58.944 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1069020 00:14:58.944 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1069020 00:14:58.944 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:58.944 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 1069020 ']' 00:14:58.945 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.945 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:58.945 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
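
This restart is the recovery half of the dirty test. The previous target died under kill -9, so the lvstore metadata on the AIO file was never marked cleanly shut down; when the fresh nvmf_tgt re-creates the AIO bdev, the lvstore examine path runs blobstore recovery (the bs_recover / "Recover: blob" notices just below) and replays the on-disk metadata. The checks that follow amount to, roughly (paths shortened, UUIDs from this run):

  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096    # triggers recovery on examine
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py bdev_get_bdevs -b c720fdde-2d8e-4420-ab86-f2036de56459 -t 2000   # lvs/lvol is back
  scripts/rpc.py bdev_lvol_get_lvstores -u 85ba4870-c522-4415-a43b-406f6df66550 \
      | jq -r '.[0].free_clusters'                                          # expect 61, the pre-kill value

It then re-checks total_data_clusters == 99, i.e. the grow performed under load survived the crash.
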
00:14:58.945 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:58.945 09:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:58.945 [2024-06-11 09:29:30.634842] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:14:58.945 [2024-06-11 09:29:30.634896] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.945 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.945 [2024-06-11 09:29:30.718794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.206 [2024-06-11 09:29:30.783753] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.206 [2024-06-11 09:29:30.783788] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.206 [2024-06-11 09:29:30.783800] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.206 [2024-06-11 09:29:30.783806] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.206 [2024-06-11 09:29:30.783812] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.206 [2024-06-11 09:29:30.783835] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.778 09:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:59.778 09:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:14:59.778 09:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:59.778 09:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:59.778 09:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:59.778 09:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.778 09:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:00.038 [2024-06-11 09:29:31.717122] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:00.038 [2024-06-11 09:29:31.717210] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:00.038 [2024-06-11 09:29:31.717239] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:00.038 09:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:00.038 09:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c720fdde-2d8e-4420-ab86-f2036de56459 00:15:00.038 09:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=c720fdde-2d8e-4420-ab86-f2036de56459 00:15:00.038 09:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:00.038 09:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:15:00.038 09:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:00.038 09:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:00.038 09:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:00.298 09:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c720fdde-2d8e-4420-ab86-f2036de56459 -t 2000 00:15:00.559 [ 00:15:00.559 { 00:15:00.559 "name": "c720fdde-2d8e-4420-ab86-f2036de56459", 00:15:00.559 "aliases": [ 00:15:00.559 "lvs/lvol" 00:15:00.559 ], 00:15:00.559 "product_name": "Logical Volume", 00:15:00.559 "block_size": 4096, 00:15:00.559 "num_blocks": 38912, 00:15:00.559 "uuid": "c720fdde-2d8e-4420-ab86-f2036de56459", 00:15:00.559 "assigned_rate_limits": { 00:15:00.559 "rw_ios_per_sec": 0, 00:15:00.559 "rw_mbytes_per_sec": 0, 00:15:00.559 "r_mbytes_per_sec": 0, 00:15:00.559 "w_mbytes_per_sec": 0 00:15:00.559 }, 00:15:00.559 "claimed": false, 00:15:00.559 "zoned": false, 00:15:00.559 "supported_io_types": { 00:15:00.559 "read": true, 00:15:00.559 "write": true, 00:15:00.559 "unmap": true, 00:15:00.559 "write_zeroes": true, 00:15:00.559 "flush": false, 00:15:00.559 "reset": true, 00:15:00.559 "compare": false, 00:15:00.559 "compare_and_write": false, 00:15:00.559 "abort": false, 00:15:00.559 "nvme_admin": false, 00:15:00.559 "nvme_io": false 00:15:00.559 }, 00:15:00.559 "driver_specific": { 00:15:00.559 "lvol": { 00:15:00.559 "lvol_store_uuid": "85ba4870-c522-4415-a43b-406f6df66550", 00:15:00.559 "base_bdev": "aio_bdev", 00:15:00.559 "thin_provision": false, 00:15:00.559 "num_allocated_clusters": 38, 00:15:00.559 "snapshot": false, 00:15:00.559 "clone": false, 00:15:00.559 "esnap_clone": false 00:15:00.559 } 00:15:00.559 } 00:15:00.559 } 00:15:00.559 ] 00:15:00.559 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:15:00.559 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba4870-c522-4415-a43b-406f6df66550 00:15:00.559 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:00.559 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:00.559 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba4870-c522-4415-a43b-406f6df66550 00:15:00.559 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:00.820 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:00.820 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:01.081 [2024-06-11 09:29:32.733665] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:01.081 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
85ba4870-c522-4415-a43b-406f6df66550 00:15:01.081 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:15:01.081 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba4870-c522-4415-a43b-406f6df66550 00:15:01.081 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.081 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:01.081 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.081 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:01.081 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.081 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:01.081 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.081 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:01.081 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba4870-c522-4415-a43b-406f6df66550 00:15:01.341 request: 00:15:01.341 { 00:15:01.341 "uuid": "85ba4870-c522-4415-a43b-406f6df66550", 00:15:01.341 "method": "bdev_lvol_get_lvstores", 00:15:01.341 "req_id": 1 00:15:01.341 } 00:15:01.341 Got JSON-RPC error response 00:15:01.341 response: 00:15:01.341 { 00:15:01.341 "code": -19, 00:15:01.341 "message": "No such device" 00:15:01.341 } 00:15:01.341 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:15:01.341 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:01.341 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:01.341 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:01.342 09:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:01.603 aio_bdev 00:15:01.603 09:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c720fdde-2d8e-4420-ab86-f2036de56459 00:15:01.603 09:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=c720fdde-2d8e-4420-ab86-f2036de56459 00:15:01.603 09:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:01.603 09:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:15:01.603 09:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 
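The request/response pair above is a deliberate failure: the lvstore's base bdev has just been removed, so bdev_lvol_get_lvstores must fail with -19 (No such device). The harness asserts this with its NOT wrapper, which inverts an exit status; a simplified sketch of the pattern (the real helper in autotest_common.sh, traced above as valid_exec_arg plus the es bookkeeping, also checks that the command is actually executable):

    NOT() { ! "$@"; }
    # Succeeds only if the RPC fails, e.g. with the -19 JSON-RPC response shown above.
    NOT ./scripts/rpc.py bdev_lvol_get_lvstores -u 85ba4870-c522-4415-a43b-406f6df66550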
00:15:01.603 09:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:01.603 09:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:01.603 09:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c720fdde-2d8e-4420-ab86-f2036de56459 -t 2000 00:15:01.863 [ 00:15:01.863 { 00:15:01.863 "name": "c720fdde-2d8e-4420-ab86-f2036de56459", 00:15:01.863 "aliases": [ 00:15:01.863 "lvs/lvol" 00:15:01.863 ], 00:15:01.863 "product_name": "Logical Volume", 00:15:01.863 "block_size": 4096, 00:15:01.863 "num_blocks": 38912, 00:15:01.863 "uuid": "c720fdde-2d8e-4420-ab86-f2036de56459", 00:15:01.863 "assigned_rate_limits": { 00:15:01.863 "rw_ios_per_sec": 0, 00:15:01.863 "rw_mbytes_per_sec": 0, 00:15:01.863 "r_mbytes_per_sec": 0, 00:15:01.863 "w_mbytes_per_sec": 0 00:15:01.863 }, 00:15:01.863 "claimed": false, 00:15:01.863 "zoned": false, 00:15:01.864 "supported_io_types": { 00:15:01.864 "read": true, 00:15:01.864 "write": true, 00:15:01.864 "unmap": true, 00:15:01.864 "write_zeroes": true, 00:15:01.864 "flush": false, 00:15:01.864 "reset": true, 00:15:01.864 "compare": false, 00:15:01.864 "compare_and_write": false, 00:15:01.864 "abort": false, 00:15:01.864 "nvme_admin": false, 00:15:01.864 "nvme_io": false 00:15:01.864 }, 00:15:01.864 "driver_specific": { 00:15:01.864 "lvol": { 00:15:01.864 "lvol_store_uuid": "85ba4870-c522-4415-a43b-406f6df66550", 00:15:01.864 "base_bdev": "aio_bdev", 00:15:01.864 "thin_provision": false, 00:15:01.864 "num_allocated_clusters": 38, 00:15:01.864 "snapshot": false, 00:15:01.864 "clone": false, 00:15:01.864 "esnap_clone": false 00:15:01.864 } 00:15:01.864 } 00:15:01.864 } 00:15:01.864 ] 00:15:01.864 09:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:15:01.864 09:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba4870-c522-4415-a43b-406f6df66550 00:15:01.864 09:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:02.124 09:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:02.124 09:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:02.124 09:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba4870-c522-4415-a43b-406f6df66550 00:15:02.124 09:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:02.385 09:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c720fdde-2d8e-4420-ab86-f2036de56459 00:15:02.385 09:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 85ba4870-c522-4415-a43b-406f6df66550 00:15:02.646 09:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:02.907 00:15:02.907 real 0m17.913s 00:15:02.907 user 0m47.119s 00:15:02.907 sys 0m3.083s 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:02.907 ************************************ 00:15:02.907 END TEST lvs_grow_dirty 00:15:02.907 ************************************ 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:02.907 nvmf_trace.0 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:02.907 09:29:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:02.907 rmmod nvme_tcp 00:15:02.907 rmmod nvme_fabrics 00:15:02.907 rmmod nvme_keyring 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1069020 ']' 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1069020 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 1069020 ']' 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 1069020 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1069020 00:15:03.195 09:29:34 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1069020' 00:15:03.195 killing process with pid 1069020 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 1069020 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 1069020 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.195 09:29:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.742 09:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:05.742 00:15:05.742 real 0m45.907s 00:15:05.742 user 1m10.443s 00:15:05.742 sys 0m10.432s 00:15:05.742 09:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:05.742 09:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:05.742 ************************************ 00:15:05.742 END TEST nvmf_lvs_grow 00:15:05.742 ************************************ 00:15:05.742 09:29:37 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:05.742 09:29:37 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:05.742 09:29:37 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:05.742 09:29:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:05.742 ************************************ 00:15:05.742 START TEST nvmf_bdev_io_wait 00:15:05.742 ************************************ 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:05.742 * Looking for test storage... 
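Every top-level case in this log runs through the same run_test wrapper, which is what emits the START TEST/END TEST banners and the real/user/sys timings seen above. A simplified sketch of the pattern, assuming only what the trace shows (the actual helper in autotest_common.sh also records per-test timing data for the final report):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                  # the test script itself, e.g. bdev_io_wait.sh --transport=tcp
        echo "END TEST $name"
    }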
00:15:05.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:05.742 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:05.743 09:29:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:12.331 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:12.331 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:12.331 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:12.331 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:12.331 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:12.332 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.332 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:12.332 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:12.332 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:12.332 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:12.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:12.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:15:12.592 00:15:12.592 --- 10.0.0.2 ping statistics --- 00:15:12.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.592 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:12.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.470 ms 00:15:12.592 00:15:12.592 --- 10.0.0.1 ping statistics --- 00:15:12.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.592 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1074795 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1074795 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 1074795 ']' 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:12.592 09:29:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:12.853 [2024-06-11 09:29:44.452056] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
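The two pings above are nvmf_tcp_init's sanity check: one port of the e810 NIC (cvl_0_0) is moved into a private network namespace for the target while its peer (cvl_0_1) stays in the root namespace for the initiator, so test traffic crosses a real TCP path between 10.0.0.1 and 10.0.0.2. The plumbing, condensed from the nvmf/common.sh trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator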
00:15:12.853 [2024-06-11 09:29:44.452124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.853 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.853 [2024-06-11 09:29:44.540353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.853 [2024-06-11 09:29:44.638077] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.853 [2024-06-11 09:29:44.638115] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.853 [2024-06-11 09:29:44.638123] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.853 [2024-06-11 09:29:44.638129] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.853 [2024-06-11 09:29:44.638135] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.853 [2024-06-11 09:29:44.638246] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.853 [2024-06-11 09:29:44.638365] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.853 [2024-06-11 09:29:44.638523] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.853 [2024-06-11 09:29:44.638524] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:13.795 [2024-06-11 09:29:45.438326] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.795 09:29:45 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:13.795 Malloc0 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:13.795 [2024-06-11 09:29:45.505630] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1075056 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1075059 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:13.795 { 00:15:13.795 "params": { 00:15:13.795 "name": "Nvme$subsystem", 00:15:13.795 "trtype": "$TEST_TRANSPORT", 00:15:13.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.795 "adrfam": "ipv4", 00:15:13.795 "trsvcid": "$NVMF_PORT", 00:15:13.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.795 "hdgst": ${hdgst:-false}, 00:15:13.795 "ddgst": ${ddgst:-false} 00:15:13.795 }, 00:15:13.795 "method": "bdev_nvme_attach_controller" 00:15:13.795 } 00:15:13.795 EOF 00:15:13.795 )") 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1075061 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:13.795 { 00:15:13.795 "params": { 00:15:13.795 "name": "Nvme$subsystem", 00:15:13.795 "trtype": "$TEST_TRANSPORT", 00:15:13.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.795 "adrfam": "ipv4", 00:15:13.795 "trsvcid": "$NVMF_PORT", 00:15:13.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.795 "hdgst": ${hdgst:-false}, 00:15:13.795 "ddgst": ${ddgst:-false} 00:15:13.795 }, 00:15:13.795 "method": "bdev_nvme_attach_controller" 00:15:13.795 } 00:15:13.795 EOF 00:15:13.795 )") 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1075065 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:13.795 { 00:15:13.795 "params": { 00:15:13.795 "name": "Nvme$subsystem", 00:15:13.795 "trtype": "$TEST_TRANSPORT", 00:15:13.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.795 "adrfam": "ipv4", 00:15:13.795 "trsvcid": "$NVMF_PORT", 00:15:13.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.795 "hdgst": ${hdgst:-false}, 00:15:13.795 "ddgst": ${ddgst:-false} 00:15:13.795 }, 00:15:13.795 "method": "bdev_nvme_attach_controller" 00:15:13.795 } 00:15:13.795 EOF 00:15:13.795 )") 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:15:13.795 { 00:15:13.795 "params": { 00:15:13.795 "name": "Nvme$subsystem", 00:15:13.795 "trtype": "$TEST_TRANSPORT", 00:15:13.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.795 "adrfam": "ipv4", 00:15:13.795 "trsvcid": "$NVMF_PORT", 00:15:13.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.795 "hdgst": ${hdgst:-false}, 00:15:13.795 "ddgst": ${ddgst:-false} 00:15:13.795 }, 00:15:13.795 "method": "bdev_nvme_attach_controller" 00:15:13.795 } 00:15:13.795 EOF 00:15:13.795 )") 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1075056 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:13.795 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:13.795 "params": { 00:15:13.795 "name": "Nvme1", 00:15:13.795 "trtype": "tcp", 00:15:13.796 "traddr": "10.0.0.2", 00:15:13.796 "adrfam": "ipv4", 00:15:13.796 "trsvcid": "4420", 00:15:13.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.796 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:13.796 "hdgst": false, 00:15:13.796 "ddgst": false 00:15:13.796 }, 00:15:13.796 "method": "bdev_nvme_attach_controller" 00:15:13.796 }' 00:15:13.796 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:13.796 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:13.796 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:13.796 "params": { 00:15:13.796 "name": "Nvme1", 00:15:13.796 "trtype": "tcp", 00:15:13.796 "traddr": "10.0.0.2", 00:15:13.796 "adrfam": "ipv4", 00:15:13.796 "trsvcid": "4420", 00:15:13.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.796 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:13.796 "hdgst": false, 00:15:13.796 "ddgst": false 00:15:13.796 }, 00:15:13.796 "method": "bdev_nvme_attach_controller" 00:15:13.796 }' 00:15:13.796 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:13.796 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:13.796 "params": { 00:15:13.796 "name": "Nvme1", 00:15:13.796 "trtype": "tcp", 00:15:13.796 "traddr": "10.0.0.2", 00:15:13.796 "adrfam": "ipv4", 00:15:13.796 "trsvcid": "4420", 00:15:13.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.796 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:13.796 "hdgst": false, 00:15:13.796 "ddgst": false 00:15:13.796 }, 00:15:13.796 "method": "bdev_nvme_attach_controller" 00:15:13.796 }' 00:15:13.796 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:13.796 09:29:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:13.796 "params": { 00:15:13.796 "name": "Nvme1", 00:15:13.796 "trtype": "tcp", 00:15:13.796 "traddr": "10.0.0.2", 00:15:13.796 "adrfam": "ipv4", 00:15:13.796 "trsvcid": "4420", 00:15:13.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.796 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:13.796 "hdgst": false, 00:15:13.796 "ddgst": false 00:15:13.796 }, 00:15:13.796 "method": "bdev_nvme_attach_controller" 
00:15:13.796 }' 00:15:13.796 [2024-06-11 09:29:45.558118] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:15:13.796 [2024-06-11 09:29:45.558166] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:13.796 [2024-06-11 09:29:45.562545] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:15:13.796 [2024-06-11 09:29:45.562593] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:13.796 [2024-06-11 09:29:45.563033] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:15:13.796 [2024-06-11 09:29:45.563076] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:13.796 [2024-06-11 09:29:45.571651] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:15:13.796 [2024-06-11 09:29:45.571709] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:13.796 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.056 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.056 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.056 [2024-06-11 09:29:45.699052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.056 [2024-06-11 09:29:45.743171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.056 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.056 [2024-06-11 09:29:45.751142] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:15:14.056 [2024-06-11 09:29:45.792361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.056 [2024-06-11 09:29:45.793405] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:15:14.056 [2024-06-11 09:29:45.840445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.057 [2024-06-11 09:29:45.842758] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:15:14.317 [2024-06-11 09:29:45.889847] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:15:14.317 Running I/O for 1 seconds... 00:15:14.317 Running I/O for 1 seconds... 00:15:14.317 Running I/O for 1 seconds... 00:15:14.578 Running I/O for 1 seconds... 
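At this point four bdevperf instances are running against the same cnode1 subsystem concurrently, one workload each, and every instance receives its target description as JSON through a process-substitution file descriptor (the /dev/fd/63 visible in the command lines above). A condensed sketch of the launch pattern, assuming the gen_nvmf_target_json helper from nvmf/common.sh traced above:

    bdevperf=./build/examples/bdevperf
    i=1
    for spec in '0x10 write' '0x20 read' '0x40 flush' '0x80 unmap'; do
        set -- $spec        # word-split into $1 = core mask, $2 = workload
        "$bdevperf" -m "$1" -i "$i" --json <(gen_nvmf_target_json) \
            -q 128 -o 4096 -w "$2" -t 1 -s 256 &
        i=$((i + 1))
    done
    wait                    # each instance prints one of the latency tables below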
00:15:15.520
00:15:15.520 Latency(us) Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:15.520 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:15:15.520 Nvme1n1 : 1.00 18771.60 73.33 0.00 0.00 6799.41 4478.29 12997.97
00:15:15.520 ===================================================================================================================
00:15:15.520 Total : 18771.60 73.33 0.00 0.00 6799.41 4478.29 12997.97
00:15:15.520
00:15:15.520 Latency(us)
00:15:15.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:15.520 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:15:15.520 Nvme1n1 : 1.01 11858.19 46.32 0.00 0.00 10759.49 5352.11 20753.07
00:15:15.520 ===================================================================================================================
00:15:15.520 Total : 11858.19 46.32 0.00 0.00 10759.49 5352.11 20753.07
00:15:15.520
00:15:15.520 Latency(us)
00:15:15.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:15.520 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:15:15.520 Nvme1n1 : 1.00 186079.43 726.87 0.00 0.00 684.83 271.36 785.07
00:15:15.521 ===================================================================================================================
00:15:15.521 Total : 186079.43 726.87 0.00 0.00 684.83 271.36 785.07
00:15:15.521
00:15:15.521 Latency(us)
00:15:15.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:15.521 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:15:15.521 Nvme1n1 : 1.00 13227.38 51.67 0.00 0.00 9647.33 2607.79 15728.64
00:15:15.521 ===================================================================================================================
00:15:15.521 Total : 13227.38 51.67 0.00 0.00 9647.33 2607.79 15728.64
00:15:15.521 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1075059
00:15:15.521 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1075061
00:15:15.521 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1075065
00:15:15.521 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:15.521 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:15.521 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:15:15.521 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:15.521 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:15:15.521 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:15:15.521 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:15.521 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync
00:15:15.521 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:15.521 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e
00:15:15.521 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:15.521 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:15:15.521 rmmod nvme_tcp
00:15:15.781 rmmod nvme_fabrics
00:15:15.781 rmmod nvme_keyring
00:15:15.781 09:29:47 nvmf_tcp.nvmf_bdev_io_wait
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:15.781 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:15.781 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:15.781 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1074795 ']' 00:15:15.781 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1074795 00:15:15.781 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 1074795 ']' 00:15:15.782 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 1074795 00:15:15.782 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:15:15.782 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:15.782 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1074795 00:15:15.782 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:15.782 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:15.782 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1074795' 00:15:15.782 killing process with pid 1074795 00:15:15.782 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 1074795 00:15:15.782 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 1074795 00:15:15.782 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:15.782 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:15.782 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:15.782 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.782 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:15.782 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.782 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.782 09:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.329 09:29:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:18.329 00:15:18.329 real 0m12.587s 00:15:18.329 user 0m18.795s 00:15:18.329 sys 0m7.005s 00:15:18.329 09:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:18.329 09:29:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:18.329 ************************************ 00:15:18.329 END TEST nvmf_bdev_io_wait 00:15:18.329 ************************************ 00:15:18.329 09:29:49 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:18.329 09:29:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:18.329 09:29:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:18.329 09:29:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:18.329 ************************************ 00:15:18.329 START TEST nvmf_queue_depth 00:15:18.329 ************************************ 00:15:18.329 09:29:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:18.329 * Looking for test storage... 00:15:18.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:18.329 09:29:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:18.329 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:18.329 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.329 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.329 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:18.330 09:29:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:24.935 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:24.935 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:24.935 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:24.935 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:24.935 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:24.935 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:24.935 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:24.935 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:24.935 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:24.935 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:24.935 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:24.935 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:24.935 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:24.935 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:24.935 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:24.935 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:24.935 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:24.936 
09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:24.936 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:24.936 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:24.936 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:24.936 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:24.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:24.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:15:24.936 00:15:24.936 --- 10.0.0.2 ping statistics --- 00:15:24.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.936 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:24.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:24.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:15:24.936 00:15:24.936 --- 10.0.0.1 ping statistics --- 00:15:24.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.936 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1080084 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1080084 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 1080084 ']' 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:24.936 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:24.936 [2024-06-11 09:29:56.688843] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
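Untangled from the xtrace above, the plumbing that nvmf_tcp_init performs before the target starts is short: the first e810 port (cvl_0_0) is moved into a private network namespace to act as the target, while the second port (cvl_0_1) stays in the root namespace as the initiator. A condensed sketch of the same commands, taken from the trace (run as root; the surrounding common.sh error handling is omitted):

    # isolate the target port in its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator side: 10.0.0.1/24 on cvl_0_1 in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    # target side: 10.0.0.2/24 on cvl_0_0 inside the namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic from the initiator interface reach port 4420
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is also why nvmfappstart wraps nvmf_tgt in ip netns exec cvl_0_0_ns_spdk: the target process has to live in the namespace that owns cvl_0_0.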
00:15:24.937 [2024-06-11 09:29:56.688890] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.937 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.197 [2024-06-11 09:29:56.755150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.197 [2024-06-11 09:29:56.818195] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.197 [2024-06-11 09:29:56.818232] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.197 [2024-06-11 09:29:56.818239] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.197 [2024-06-11 09:29:56.818245] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.197 [2024-06-11 09:29:56.818251] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.197 [2024-06-11 09:29:56.818272] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:25.197 [2024-06-11 09:29:56.950918] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:25.197 Malloc0 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:25.197 09:29:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.197 09:29:56 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:25.197 09:29:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.197 09:29:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.197 09:29:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.197 09:29:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:25.197 [2024-06-11 09:29:57.009564] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.457 09:29:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.457 09:29:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1080141 00:15:25.457 09:29:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:25.457 09:29:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:25.457 09:29:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1080141 /var/tmp/bdevperf.sock 00:15:25.457 09:29:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 1080141 ']' 00:15:25.457 09:29:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:25.457 09:29:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:25.457 09:29:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:25.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:25.457 09:29:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:25.457 09:29:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:25.457 [2024-06-11 09:29:57.061574] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
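Stripped of the rpc_cmd and run_test wrappers, the scenario being assembled here has two halves: a target built over JSON-RPC, and a bdevperf initiator that will drive it at queue depth 1024. A condensed sketch of the same calls, assuming rpc.py stands for scripts/rpc.py against each app's RPC socket, with the long workspace paths shortened:

    # target side (default socket /var/tmp/spdk.sock): transport, backing bdev, subsystem, listener
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf as a second SPDK app on its own RPC socket,
    # 4 KiB verify I/O at queue depth 1024 for 10 seconds
    bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The bdev_nvme_attach_controller and perform_tests calls appear in the trace just below, once bdevperf finishes its own EAL initialization.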
00:15:25.457 [2024-06-11 09:29:57.061621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1080141 ]
00:15:25.457 EAL: No free 2048 kB hugepages reported on node 1
00:15:25.457 [2024-06-11 09:29:57.136193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:25.457 [2024-06-11 09:29:57.200517] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:15:25.718 09:29:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:15:25.718 09:29:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0
00:15:25.718 09:29:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:25.718 09:29:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable
00:15:25.718 09:29:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:15:25.718 NVMe0n1
00:15:25.718 09:29:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:15:25.718 09:29:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:15:25.979 Running I/O for 10 seconds...
00:15:35.982
00:15:35.982 Latency(us)
00:15:35.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:35.983 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:15:35.983 Verification LBA range: start 0x0 length 0x4000
00:15:35.983 NVMe0n1 : 10.08 9405.53 36.74 0.00 0.00 108354.40 24685.23 75584.85
00:15:35.983 ===================================================================================================================
00:15:35.983 Total : 9405.53 36.74 0.00 0.00 108354.40 24685.23 75584.85
00:15:35.983 0
00:15:35.983 09:30:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1080141
00:15:35.983 09:30:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 1080141 ']'
00:15:35.983 09:30:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 1080141
00:15:35.983 09:30:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname
00:15:35.983 09:30:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:15:35.983 09:30:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1080141
00:15:35.983 09:30:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:15:35.983 09:30:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:15:35.983 09:30:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1080141'
00:15:35.983 killing process with pid 1080141
00:15:35.983 09:30:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 1080141
00:15:35.983 Received shutdown signal, test time was about 10.000000 seconds
00:15:35.983
00:15:35.983 Latency(us)
00:15:35.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:35.983 ===================================================================================================================
00:15:35.983 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:35.983 09:30:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 1080141
00:15:36.243 09:30:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:15:36.243 09:30:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:15:36.243 09:30:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:36.243 09:30:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:15:36.243 09:30:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:36.243 09:30:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:15:36.243 09:30:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:36.243 09:30:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:15:36.243 rmmod nvme_tcp
00:15:36.243 rmmod nvme_fabrics
00:15:36.243 rmmod nvme_keyring
00:15:36.243 09:30:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:36.243 09:30:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:15:36.243 09:30:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:15:36.243 09:30:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1080084 ']'
00:15:36.243 09:30:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1080084
00:15:36.243 09:30:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 1080084 ']'
00:15:36.243 09:30:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 1080084
00:15:36.243 09:30:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname
00:15:36.243 09:30:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:15:36.243 09:30:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1080084
00:15:36.243 09:30:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:15:36.243 09:30:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:15:36.243 09:30:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1080084'
00:15:36.243 killing process with pid 1080084
00:15:36.243 09:30:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 1080084
00:15:36.243 09:30:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 1080084
00:15:36.504 09:30:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:15:36.504 09:30:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:15:36.504 09:30:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:15:36.504 09:30:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:36.504 09:30:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns
00:15:36.504 09:30:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:36.504 09:30:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:36.504 09:30:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:38.447 09:30:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:15:38.447
00:15:38.447 real 0m20.505s
00:15:38.447 user 0m23.627s
00:15:38.447 sys
0m6.289s 00:15:38.447 09:30:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:38.447 09:30:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:38.447 ************************************ 00:15:38.447 END TEST nvmf_queue_depth 00:15:38.447 ************************************ 00:15:38.708 09:30:10 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:38.708 09:30:10 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:38.708 09:30:10 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:38.708 09:30:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:38.708 ************************************ 00:15:38.708 START TEST nvmf_target_multipath 00:15:38.708 ************************************ 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:38.708 * Looking for test storage... 00:15:38.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.708 09:30:10 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:38.708 09:30:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:46.850 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:46.850 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:46.850 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:46.850 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:46.850 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:46.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:46.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:15:46.851 00:15:46.851 --- 10.0.0.2 ping statistics --- 00:15:46.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.851 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:46.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:46.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:15:46.851 00:15:46.851 --- 10.0.0.1 ping statistics --- 00:15:46.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.851 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:46.851 only one NIC for nvmf test 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:46.851 rmmod nvme_tcp 00:15:46.851 rmmod nvme_fabrics 00:15:46.851 rmmod nvme_keyring 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.851 09:30:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:48.235 00:15:48.235 real 0m9.462s 00:15:48.235 user 0m2.038s 00:15:48.235 sys 0m5.342s 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:48.235 09:30:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:48.235 ************************************ 00:15:48.235 END TEST nvmf_target_multipath 00:15:48.235 ************************************ 00:15:48.235 09:30:19 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:48.235 09:30:19 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:48.235 09:30:19 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:48.235 09:30:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:48.235 ************************************ 00:15:48.235 START TEST nvmf_zcopy 00:15:48.235 ************************************ 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:48.235 * Looking for test storage... 
00:15:48.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.235 09:30:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:48.236 09:30:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:56.380 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.380 
09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:56.380 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:56.380 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:56.380 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.380 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:56.381 09:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:56.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:15:56.381 00:15:56.381 --- 10.0.0.2 ping statistics --- 00:15:56.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.381 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:56.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:56.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:15:56.381 00:15:56.381 --- 10.0.0.1 ping statistics --- 00:15:56.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.381 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1092367 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1092367 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 1092367 ']' 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:56.381 09:30:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:56.381 [2024-06-11 09:30:27.220130] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:15:56.381 [2024-06-11 09:30:27.220194] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.381 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.381 [2024-06-11 09:30:27.288985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.381 [2024-06-11 09:30:27.362149] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.381 [2024-06-11 09:30:27.362189] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:56.381 [2024-06-11 09:30:27.362196] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.381 [2024-06-11 09:30:27.362202] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.381 [2024-06-11 09:30:27.362208] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:56.381 [2024-06-11 09:30:27.362226] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:56.381 [2024-06-11 09:30:28.129335] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:56.381 [2024-06-11 09:30:28.153509] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:56.381 malloc0 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.381 
09:30:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:56.381 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:56.642 09:30:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:56.642 09:30:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:56.642 09:30:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:56.642 09:30:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:56.642 09:30:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:56.642 09:30:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:56.642 09:30:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:56.642 { 00:15:56.642 "params": { 00:15:56.642 "name": "Nvme$subsystem", 00:15:56.642 "trtype": "$TEST_TRANSPORT", 00:15:56.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:56.642 "adrfam": "ipv4", 00:15:56.642 "trsvcid": "$NVMF_PORT", 00:15:56.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:56.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:56.642 "hdgst": ${hdgst:-false}, 00:15:56.642 "ddgst": ${ddgst:-false} 00:15:56.642 }, 00:15:56.642 "method": "bdev_nvme_attach_controller" 00:15:56.642 } 00:15:56.642 EOF 00:15:56.642 )") 00:15:56.642 09:30:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:56.642 09:30:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:56.642 09:30:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:56.642 09:30:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:56.642 "params": { 00:15:56.642 "name": "Nvme1", 00:15:56.642 "trtype": "tcp", 00:15:56.642 "traddr": "10.0.0.2", 00:15:56.642 "adrfam": "ipv4", 00:15:56.642 "trsvcid": "4420", 00:15:56.642 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:56.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:56.642 "hdgst": false, 00:15:56.642 "ddgst": false 00:15:56.642 }, 00:15:56.642 "method": "bdev_nvme_attach_controller" 00:15:56.642 }' 00:15:56.642 [2024-06-11 09:30:28.246513] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:15:56.642 [2024-06-11 09:30:28.246560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092640 ] 00:15:56.642 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.642 [2024-06-11 09:30:28.323142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.642 [2024-06-11 09:30:28.388034] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.903 Running I/O for 10 seconds... 
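For orientation, the nvmf_tcp_init plumbing and the rpc_cmd calls above condense to the standalone sequence below. This is a minimal sketch, assuming SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket; the interface names, addresses, and NQNs are taken from this run, and the comments are best-effort glosses of the flags rather than anything stated in the log.

  # move one port of the e810 pair into a private namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let 4420 through

  # start the target inside the namespace, then shape it over RPC
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

  # TCP transport with zero copy on (-o disables the C2H success
  # optimization, -c 0 sets the in-capsule data size to zero)
  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy

  # subsystem: allow any host (-a), fixed serial (-s), at most 10 namespaces (-m)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

  # data and discovery listeners on 10.0.0.2:4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # 32 MB malloc bdev with 4096-byte blocks, exported as namespace 1
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The bdevperf job that follows then attaches to that listener from the initiator side using the JSON shown above (traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1) and runs verify I/O against it for 10 seconds.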
00:16:09.139 00:16:09.139 Latency(us) 00:16:09.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.140 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:09.140 Verification LBA range: start 0x0 length 0x1000 00:16:09.140 Nvme1n1 : 10.01 6732.67 52.60 0.00 0.00 18952.77 856.75 30583.47 00:16:09.140 =================================================================================================================== 00:16:09.140 Total : 6732.67 52.60 0.00 0.00 18952.77 856.75 30583.47 00:16:09.140 09:30:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1095083 00:16:09.140 09:30:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:09.140 09:30:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:09.140 09:30:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:09.140 09:30:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:09.140 09:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:09.140 09:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:09.140 09:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:09.140 09:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:09.140 { 00:16:09.140 "params": { 00:16:09.140 "name": "Nvme$subsystem", 00:16:09.140 "trtype": "$TEST_TRANSPORT", 00:16:09.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:09.140 "adrfam": "ipv4", 00:16:09.140 "trsvcid": "$NVMF_PORT", 00:16:09.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:09.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:09.140 "hdgst": ${hdgst:-false}, 00:16:09.140 "ddgst": ${ddgst:-false} 00:16:09.140 }, 00:16:09.140 "method": "bdev_nvme_attach_controller" 00:16:09.140 } 00:16:09.140 EOF 00:16:09.140 )") 00:16:09.140 [2024-06-11 09:30:38.861647] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:38.861679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 09:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:09.140 09:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:16:09.140 09:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:09.140 09:30:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:09.140 "params": { 00:16:09.140 "name": "Nvme1", 00:16:09.140 "trtype": "tcp", 00:16:09.140 "traddr": "10.0.0.2", 00:16:09.140 "adrfam": "ipv4", 00:16:09.140 "trsvcid": "4420", 00:16:09.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:09.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:09.140 "hdgst": false, 00:16:09.140 "ddgst": false 00:16:09.140 }, 00:16:09.140 "method": "bdev_nvme_attach_controller" 00:16:09.140 }' 00:16:09.140 [2024-06-11 09:30:38.873647] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:38.873660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:38.885679] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:38.885690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:38.897708] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:38.897718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:38.902179] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:16:09.140 [2024-06-11 09:30:38.902226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1095083 ] 00:16:09.140 [2024-06-11 09:30:38.909739] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:38.909750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:38.921771] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:38.921781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.140 [2024-06-11 09:30:38.933803] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:38.933814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:38.945836] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:38.945846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:38.957867] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:38.957879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:38.969898] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:38.969910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:38.977205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.140 [2024-06-11 09:30:38.981929] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:38.981945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:38.993961] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:38.993972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.005993] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.006006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.018027] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.018042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.030060] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.030075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.041327] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.140 [2024-06-11 09:30:39.042093] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.042104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.054128] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.054142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.066161] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.066176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.078191] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.078202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.090223] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.090236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.102255] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.102266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.114327] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.114345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.126333] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.126347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.138358] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.138371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.150385] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.150395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 
[2024-06-11 09:30:39.162414] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.162425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.174447] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.174457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.186481] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.186494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.198514] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.198531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.210547] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.210557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.222580] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.222591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.234614] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.234627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.246643] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.246656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.258676] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.258686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.270707] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.270718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.282741] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.282754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.294771] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.294782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.306804] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.306815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.318837] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.318848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.330871] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 
09:30:39.330883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.342915] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.342933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 Running I/O for 5 seconds... 00:16:09.140 [2024-06-11 09:30:39.359930] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.359950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.376047] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.376066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.392838] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.392858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.410067] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.410087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.425931] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.425950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.437151] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.437170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.453610] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.453633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.470512] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.470532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.488218] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.488237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.505562] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.505581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.522969] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.522987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.539931] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.539949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 09:30:39.555854] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.555873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140 [2024-06-11 
09:30:39.566814] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.140 [2024-06-11 09:30:39.566832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.140
[... the same subsystem.c:2037 / nvmf_rpc.c:1546 error pair repeats roughly 300 more times at 10-20 ms intervals, timestamps 2024-06-11 09:30:39.584 through 09:30:44.358, elapsed marks 00:16:09.140 through 00:16:12.833 ...]
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.542 [2024-06-11 09:30:44.248850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.542 [2024-06-11 09:30:44.259776] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.542 [2024-06-11 09:30:44.259795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.542 [2024-06-11 09:30:44.277267] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.542 [2024-06-11 09:30:44.277285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.542 [2024-06-11 09:30:44.292010] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.542 [2024-06-11 09:30:44.292029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.542 [2024-06-11 09:30:44.309288] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.542 [2024-06-11 09:30:44.309307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.542 [2024-06-11 09:30:44.325131] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.542 [2024-06-11 09:30:44.325154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.542 [2024-06-11 09:30:44.342780] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.542 [2024-06-11 09:30:44.342799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.833 [2024-06-11 09:30:44.358247] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.833 [2024-06-11 09:30:44.358266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.833 00:16:12.833 Latency(us) 00:16:12.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.833 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:12.833 Nvme1n1 : 5.01 13216.18 103.25 0.00 0.00 9675.01 4532.91 18786.99 00:16:12.833 =================================================================================================================== 00:16:12.833 Total : 13216.18 103.25 0.00 0.00 9675.01 4532.91 18786.99 00:16:12.833 [2024-06-11 09:30:44.370375] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.833 [2024-06-11 09:30:44.370394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.833 [2024-06-11 09:30:44.382406] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.833 [2024-06-11 09:30:44.382421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.833 [2024-06-11 09:30:44.394441] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.833 [2024-06-11 09:30:44.394454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.833 [2024-06-11 09:30:44.406472] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.833 [2024-06-11 09:30:44.406486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.833 [2024-06-11 09:30:44.418503] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.833 [2024-06-11 09:30:44.418516] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.833 [2024-06-11 09:30:44.430533] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.833 [2024-06-11 09:30:44.430545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.833 [2024-06-11 09:30:44.442566] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.833 [2024-06-11 09:30:44.442576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.833 [2024-06-11 09:30:44.454601] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.833 [2024-06-11 09:30:44.454614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.833 [2024-06-11 09:30:44.466633] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.833 [2024-06-11 09:30:44.466646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.833 [2024-06-11 09:30:44.478664] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.833 [2024-06-11 09:30:44.478676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.833 [2024-06-11 09:30:44.490696] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.833 [2024-06-11 09:30:44.490707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.833 [2024-06-11 09:30:44.502728] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.833 [2024-06-11 09:30:44.502739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1095083) - No such process 00:16:12.833 09:30:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1095083 00:16:12.833 09:30:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:12.833 09:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:12.833 09:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:12.833 09:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:12.833 09:30:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:12.833 09:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:12.833 09:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:12.833 delay0 00:16:12.833 09:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:12.833 09:30:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:12.833 09:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:12.833 09:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:12.833 09:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:12.833 09:30:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:12.833 EAL: No free 2048 kB hugepages 
reported on node 1 00:16:12.833 [2024-06-11 09:30:44.622292] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:19.422 Initializing NVMe Controllers 00:16:19.422 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:19.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:19.422 Initialization complete. Launching workers. 00:16:19.422 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 262, failed: 12427 00:16:19.422 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12606, failed to submit 83 00:16:19.422 success 12526, unsuccess 80, failed 0 00:16:19.422 09:30:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:19.422 09:30:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:19.422 09:30:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:19.422 09:30:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:19.422 09:30:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:19.422 09:30:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:19.422 09:30:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:19.422 09:30:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:19.422 rmmod nvme_tcp 00:16:19.422 rmmod nvme_fabrics 00:16:19.422 rmmod nvme_keyring 00:16:19.422 09:30:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:19.422 09:30:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:19.422 09:30:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:19.422 09:30:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1092367 ']' 00:16:19.422 09:30:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1092367 00:16:19.422 09:30:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 1092367 ']' 00:16:19.422 09:30:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 1092367 00:16:19.422 09:30:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:16:19.422 09:30:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:19.422 09:30:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1092367 00:16:19.422 09:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:19.422 09:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:19.422 09:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1092367' 00:16:19.422 killing process with pid 1092367 00:16:19.422 09:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 1092367 00:16:19.422 09:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 1092367 00:16:19.422 09:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:19.422 09:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:19.422 09:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:19.422 09:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:19.422 09:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:19.422 09:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.422 09:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.422 09:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.965 09:30:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:21.965 00:16:21.965 real 0m33.374s 00:16:21.965 user 0m44.749s 00:16:21.965 sys 0m10.447s 00:16:21.965 09:30:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:21.965 09:30:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:21.965 ************************************ 00:16:21.965 END TEST nvmf_zcopy 00:16:21.965 ************************************ 00:16:21.965 09:30:53 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:21.965 09:30:53 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:21.965 09:30:53 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:21.965 09:30:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:21.965 ************************************ 00:16:21.965 START TEST nvmf_nmic 00:16:21.965 ************************************ 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:21.965 * Looking for test storage... 00:16:21.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
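A note on the wall of "Requested NSID 1 already in use" errors that ends above: they are expected. zcopy.sh re-issues nvmf_subsystem_add_ns against an NSID that is already attached while I/O is in flight, then detaches the namespace, re-attaches it through a delay bdev, and runs the abort example against the slowed-down target. A minimal sketch of that sequence, assuming SPDK's standard scripts/rpc.py client — the loop count is illustrative; the RPC names, flags, and the abort invocation are taken from the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Re-adding an attached NSID fails by design; each iteration produces the
# subsystem.c:2037 / nvmf_rpc.c:1546 error pair seen above.
for i in $(seq 1 40); do
    $rpc nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0 || true
done
# Swap the namespace for a delay bdev (1,000,000 us = 1 s added latency per op) ...
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 delay0
# ... so the abort example has plenty of queued commands to cancel:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'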
00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.965 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.966 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:21.966 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:21.966 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:16:21.966 09:30:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:21.966 09:30:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:21.966 09:30:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:21.966 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:21.966 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.966 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:21.966 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:21.966 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:21.966 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.966 09:30:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.966 09:30:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.966 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:21.966 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:21.966 09:30:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:21.966 09:30:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:28.545 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:28.545 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:28.545 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:28.545 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:28.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:28.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.794 ms 00:16:28.545 00:16:28.545 --- 10.0.0.2 ping statistics --- 00:16:28.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.545 rtt min/avg/max/mdev = 0.794/0.794/0.794/0.000 ms 00:16:28.545 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:28.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:28.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:16:28.546 00:16:28.546 --- 10.0.0.1 ping statistics --- 00:16:28.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.546 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:16:28.546 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.546 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:28.546 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:28.546 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.546 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:28.546 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:28.546 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.546 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:28.546 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:28.546 09:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:28.546 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:28.546 09:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:28.546 09:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:28.807 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1102393 00:16:28.807 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1102393 00:16:28.807 09:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:28.807 09:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 1102393 ']' 00:16:28.807 09:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.807 09:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:28.807 09:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.807 09:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:28.807 09:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:28.807 [2024-06-11 09:31:00.415426] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
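The two pings above verify the split topology that nvmf_tcp_init assembled: one port of the E810 pair (cvl_0_0) is moved into a network namespace to serve as the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic actually leaves the host stack (NET_TYPE=phy implies the two ports are physically connected). Gathered into one block for reference — every command here appears verbatim in the trace above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator-side port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP (port 4420) back in on the initiator side:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# The target is then launched inside the namespace:
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF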
00:16:28.807 [2024-06-11 09:31:00.415475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.807 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.807 [2024-06-11 09:31:00.495328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:28.807 [2024-06-11 09:31:00.562057] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.807 [2024-06-11 09:31:00.562093] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.807 [2024-06-11 09:31:00.562100] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.807 [2024-06-11 09:31:00.562106] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:28.807 [2024-06-11 09:31:00.562112] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:28.807 [2024-06-11 09:31:00.562218] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.807 [2024-06-11 09:31:00.562333] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.807 [2024-06-11 09:31:00.562433] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.807 [2024-06-11 09:31:00.562434] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:29.751 [2024-06-11 09:31:01.331149] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:29.751 Malloc0 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:29.751 [2024-06-11 09:31:01.390515] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:29.751 test case1: single bdev can't be used in multiple subsystems 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:29.751 [2024-06-11 09:31:01.426448] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:29.751 [2024-06-11 09:31:01.426467] subsystem.c:2066:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:29.751 [2024-06-11 09:31:01.426474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.751 request: 00:16:29.751 { 00:16:29.751 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:29.751 "namespace": { 00:16:29.751 "bdev_name": "Malloc0", 00:16:29.751 "no_auto_visible": false 00:16:29.751 }, 00:16:29.751 "method": "nvmf_subsystem_add_ns", 00:16:29.751 "req_id": 1 00:16:29.751 } 00:16:29.751 Got JSON-RPC error response 00:16:29.751 response: 00:16:29.751 { 00:16:29.751 "code": -32602, 00:16:29.751 "message": "Invalid parameters" 00:16:29.751 } 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:16:29.751 Adding namespace failed - expected result. 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:29.751 test case2: host connect to nvmf target in multiple paths 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:29.751 [2024-06-11 09:31:01.438594] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:29.751 09:31:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:31.138 09:31:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:33.052 09:31:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:33.052 09:31:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:16:33.052 09:31:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:16:33.052 09:31:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:16:33.052 09:31:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:16:34.964 09:31:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:34.964 09:31:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:34.964 09:31:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:16:34.964 09:31:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:16:34.964 09:31:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:16:34.964 09:31:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:16:34.964 09:31:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:34.964 [global] 00:16:34.964 thread=1 00:16:34.964 invalidate=1 00:16:34.964 rw=write 00:16:34.964 time_based=1 00:16:34.964 runtime=1 00:16:34.964 ioengine=libaio 00:16:34.964 direct=1 00:16:34.964 bs=4096 00:16:34.964 iodepth=1 00:16:34.964 norandommap=0 00:16:34.964 numjobs=1 00:16:34.964 00:16:34.964 verify_dump=1 00:16:34.964 verify_backlog=512 00:16:34.964 verify_state_save=0 00:16:34.964 do_verify=1 00:16:34.964 verify=crc32c-intel 00:16:34.964 [job0] 00:16:34.964 filename=/dev/nvme0n1 00:16:34.964 Could not set queue depth (nvme0n1) 00:16:34.964 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:34.964 fio-3.35 00:16:34.964 Starting 1 thread 00:16:36.349 00:16:36.349 job0: (groupid=0, jobs=1): err= 0: pid=1103911: Tue Jun 11 09:31:07 2024 00:16:36.349 read: IOPS=511, BW=2046KiB/s 
(2095kB/s)(2048KiB/1001msec) 00:16:36.349 slat (nsec): min=23959, max=59634, avg=24837.38, stdev=3106.22 00:16:36.349 clat (usec): min=872, max=1276, avg=1104.63, stdev=65.90 00:16:36.349 lat (usec): min=897, max=1300, avg=1129.47, stdev=66.26 00:16:36.349 clat percentiles (usec): 00:16:36.349 | 1.00th=[ 914], 5.00th=[ 979], 10.00th=[ 1029], 20.00th=[ 1057], 00:16:36.349 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1139], 00:16:36.349 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1172], 95.00th=[ 1188], 00:16:36.349 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1270], 99.95th=[ 1270], 00:16:36.349 | 99.99th=[ 1270] 00:16:36.349 write: IOPS=536, BW=2146KiB/s (2197kB/s)(2148KiB/1001msec); 0 zone resets 00:16:36.349 slat (usec): min=9, max=27351, avg=79.03, stdev=1179.11 00:16:36.349 clat (usec): min=381, max=891, avg=690.83, stdev=92.07 00:16:36.349 lat (usec): min=392, max=28064, avg=769.86, stdev=1184.01 00:16:36.349 clat percentiles (usec): 00:16:36.349 | 1.00th=[ 453], 5.00th=[ 519], 10.00th=[ 553], 20.00th=[ 619], 00:16:36.349 | 30.00th=[ 660], 40.00th=[ 676], 50.00th=[ 701], 60.00th=[ 725], 00:16:36.349 | 70.00th=[ 742], 80.00th=[ 775], 90.00th=[ 799], 95.00th=[ 824], 00:16:36.349 | 99.00th=[ 865], 99.50th=[ 881], 99.90th=[ 889], 99.95th=[ 889], 00:16:36.349 | 99.99th=[ 889] 00:16:36.349 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:36.349 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:36.349 lat (usec) : 500=1.72%, 750=34.70%, 1000=17.92% 00:16:36.349 lat (msec) : 2=45.66% 00:16:36.349 cpu : usr=1.70%, sys=2.70%, ctx=1052, majf=0, minf=1 00:16:36.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:36.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.349 issued rwts: total=512,537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:36.349 00:16:36.349 Run status group 0 (all jobs): 00:16:36.349 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:16:36.349 WRITE: bw=2146KiB/s (2197kB/s), 2146KiB/s-2146KiB/s (2197kB/s-2197kB/s), io=2148KiB (2200kB), run=1001-1001msec 00:16:36.349 00:16:36.349 Disk stats (read/write): 00:16:36.349 nvme0n1: ios=464/512, merge=0/0, ticks=1451/338, in_queue=1789, util=98.90% 00:16:36.349 09:31:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:36.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:36.349 09:31:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:36.349 09:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:16:36.349 09:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:16:36.349 09:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.349 09:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:16:36.349 09:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.349 09:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:16:36.349 09:31:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:36.349 09:31:08 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@53 -- # nvmftestfini 00:16:36.349 09:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:36.349 09:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:36.349 09:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:36.349 09:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:36.349 09:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:36.349 09:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:36.349 rmmod nvme_tcp 00:16:36.349 rmmod nvme_fabrics 00:16:36.349 rmmod nvme_keyring 00:16:36.609 09:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:36.609 09:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:36.609 09:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:36.609 09:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1102393 ']' 00:16:36.609 09:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1102393 00:16:36.609 09:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 1102393 ']' 00:16:36.609 09:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 1102393 00:16:36.609 09:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:16:36.610 09:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:36.610 09:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1102393 00:16:36.610 09:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:36.610 09:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:36.610 09:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1102393' 00:16:36.610 killing process with pid 1102393 00:16:36.610 09:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 1102393 00:16:36.610 09:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 1102393 00:16:36.610 09:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:36.610 09:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:36.610 09:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:36.610 09:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:36.610 09:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:36.610 09:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.610 09:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.610 09:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.156 09:31:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:39.156 00:16:39.156 real 0m17.166s 00:16:39.156 user 0m49.625s 00:16:39.156 sys 0m5.840s 00:16:39.156 09:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:39.156 09:31:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:39.156 ************************************ 00:16:39.156 END TEST nvmf_nmic 00:16:39.156 ************************************ 00:16:39.156 09:31:10 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:39.156 09:31:10 
nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:39.156 09:31:10 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:39.156 09:31:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:39.156 ************************************ 00:16:39.156 START TEST nvmf_fio_target 00:16:39.156 ************************************ 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:39.156 * Looking for test storage... 00:16:39.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:39.156 09:31:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:45.770 09:31:17 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:45.770 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:45.770 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.770 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.771 09:31:17 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:45.771 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:45.771 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:45.771 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:46.032 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:46.032 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:46.032 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:46.032 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:46.032 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:16:46.032 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:46.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:16:46.293 00:16:46.293 --- 10.0.0.2 ping statistics --- 00:16:46.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.293 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:46.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:46.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:16:46.293 00:16:46.293 --- 10.0.0.1 ping statistics --- 00:16:46.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.293 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1109265 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1109265 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 1109265 ']' 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
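A consolidated view of the bring-up traced above: nvmf_tcp_init moves one of the two ice ports into a private network namespace to act as the target side, addresses both ends on 10.0.0.0/24, opens the default NVMe/TCP port, and verifies connectivity in both directions. A minimal sketch of that sequence, using the interface and namespace names from this run:

  ip netns add cvl_0_0_ns_spdk                    # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator (host-side) address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
  ping -c 1 10.0.0.2                              # host -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host

The nvmf_tgt application is then started inside the same namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt, per the waitforlisten trace above), so the target listens on 10.0.0.2 while the kernel initiator connects from the host side.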
00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:46.293 09:31:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.293 [2024-06-11 09:31:17.988113] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:16:46.293 [2024-06-11 09:31:17.988164] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.293 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.293 [2024-06-11 09:31:18.069266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.554 [2024-06-11 09:31:18.145691] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.554 [2024-06-11 09:31:18.145741] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.554 [2024-06-11 09:31:18.145748] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.554 [2024-06-11 09:31:18.145761] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.554 [2024-06-11 09:31:18.145766] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:46.554 [2024-06-11 09:31:18.145891] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.554 [2024-06-11 09:31:18.146034] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.554 [2024-06-11 09:31:18.146200] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.554 [2024-06-11 09:31:18.146201] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:47.125 09:31:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:47.125 09:31:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:16:47.125 09:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:47.125 09:31:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:47.125 09:31:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.125 09:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.125 09:31:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:47.386 [2024-06-11 09:31:19.093826] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.386 09:31:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:47.647 09:31:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:47.647 09:31:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:47.908 09:31:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:47.908 09:31:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:48.168 09:31:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:48.168 09:31:19 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:48.429 09:31:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:48.429 09:31:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:48.429 09:31:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:48.690 09:31:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:48.690 09:31:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:48.952 09:31:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:48.952 09:31:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:49.213 09:31:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:49.213 09:31:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:49.213 09:31:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:49.474 09:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:49.474 09:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:49.734 09:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:49.734 09:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:49.995 09:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:50.272 [2024-06-11 09:31:21.825243] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:50.272 09:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:50.272 09:31:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:50.535 09:31:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:52.452 09:31:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:52.452 09:31:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:16:52.452 09:31:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # 
local nvme_device_counter=1 nvme_devices=0 00:16:52.452 09:31:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:16:52.452 09:31:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:16:52.452 09:31:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:16:54.393 09:31:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:54.393 09:31:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:54.393 09:31:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:16:54.393 09:31:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:16:54.393 09:31:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:16:54.393 09:31:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:16:54.393 09:31:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:54.393 [global] 00:16:54.393 thread=1 00:16:54.393 invalidate=1 00:16:54.393 rw=write 00:16:54.393 time_based=1 00:16:54.393 runtime=1 00:16:54.393 ioengine=libaio 00:16:54.393 direct=1 00:16:54.393 bs=4096 00:16:54.393 iodepth=1 00:16:54.393 norandommap=0 00:16:54.393 numjobs=1 00:16:54.393 00:16:54.393 verify_dump=1 00:16:54.393 verify_backlog=512 00:16:54.393 verify_state_save=0 00:16:54.393 do_verify=1 00:16:54.393 verify=crc32c-intel 00:16:54.393 [job0] 00:16:54.393 filename=/dev/nvme0n1 00:16:54.393 [job1] 00:16:54.393 filename=/dev/nvme0n2 00:16:54.393 [job2] 00:16:54.393 filename=/dev/nvme0n3 00:16:54.393 [job3] 00:16:54.393 filename=/dev/nvme0n4 00:16:54.393 Could not set queue depth (nvme0n1) 00:16:54.393 Could not set queue depth (nvme0n2) 00:16:54.393 Could not set queue depth (nvme0n3) 00:16:54.394 Could not set queue depth (nvme0n4) 00:16:54.665 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:54.665 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:54.665 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:54.665 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:54.665 fio-3.35 00:16:54.665 Starting 4 threads 00:16:56.070 00:16:56.070 job0: (groupid=0, jobs=1): err= 0: pid=1111596: Tue Jun 11 09:31:27 2024 00:16:56.070 read: IOPS=15, BW=62.5KiB/s (64.0kB/s)(64.0KiB/1024msec) 00:16:56.070 slat (nsec): min=7603, max=27785, avg=25462.44, stdev=4778.67 00:16:56.070 clat (usec): min=1116, max=42023, avg=39297.67, stdev=10186.28 00:16:56.070 lat (usec): min=1142, max=42049, avg=39323.13, stdev=10186.12 00:16:56.070 clat percentiles (usec): 00:16:56.070 | 1.00th=[ 1123], 5.00th=[ 1123], 10.00th=[41157], 20.00th=[41681], 00:16:56.070 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:16:56.070 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:56.070 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:56.070 | 99.99th=[42206] 00:16:56.070 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:16:56.070 slat (nsec): min=9441, max=53382, avg=31945.02, stdev=9310.98 00:16:56.070 clat (usec): min=400, 
max=999, avg=731.10, stdev=107.16 00:16:56.070 lat (usec): min=411, max=1034, avg=763.04, stdev=111.26 00:16:56.070 clat percentiles (usec): 00:16:56.070 | 1.00th=[ 433], 5.00th=[ 545], 10.00th=[ 586], 20.00th=[ 652], 00:16:56.070 | 30.00th=[ 676], 40.00th=[ 701], 50.00th=[ 725], 60.00th=[ 775], 00:16:56.070 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 865], 95.00th=[ 889], 00:16:56.070 | 99.00th=[ 955], 99.50th=[ 971], 99.90th=[ 996], 99.95th=[ 996], 00:16:56.070 | 99.99th=[ 996] 00:16:56.070 bw ( KiB/s): min= 4096, max= 4096, per=51.55%, avg=4096.00, stdev= 0.00, samples=1 00:16:56.070 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:56.070 lat (usec) : 500=2.08%, 750=50.95%, 1000=43.94% 00:16:56.070 lat (msec) : 2=0.19%, 50=2.84% 00:16:56.070 cpu : usr=1.47%, sys=1.56%, ctx=531, majf=0, minf=1 00:16:56.070 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.070 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.070 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.070 job1: (groupid=0, jobs=1): err= 0: pid=1111600: Tue Jun 11 09:31:27 2024 00:16:56.070 read: IOPS=15, BW=62.6KiB/s (64.1kB/s)(64.0KiB/1022msec) 00:16:56.070 slat (nsec): min=10700, max=30948, avg=25471.81, stdev=4120.50 00:16:56.070 clat (usec): min=999, max=42030, avg=39372.84, stdev=10233.66 00:16:56.070 lat (usec): min=1010, max=42056, avg=39398.32, stdev=10237.60 00:16:56.070 clat percentiles (usec): 00:16:56.070 | 1.00th=[ 1004], 5.00th=[ 1004], 10.00th=[41681], 20.00th=[41681], 00:16:56.070 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:56.070 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:56.070 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:56.070 | 99.99th=[42206] 00:16:56.070 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:16:56.070 slat (nsec): min=8974, max=67597, avg=30530.28, stdev=9297.30 00:16:56.070 clat (usec): min=346, max=983, avg=726.81, stdev=101.36 00:16:56.070 lat (usec): min=380, max=1029, avg=757.34, stdev=105.87 00:16:56.070 clat percentiles (usec): 00:16:56.070 | 1.00th=[ 457], 5.00th=[ 545], 10.00th=[ 594], 20.00th=[ 652], 00:16:56.070 | 30.00th=[ 676], 40.00th=[ 701], 50.00th=[ 734], 60.00th=[ 758], 00:16:56.070 | 70.00th=[ 791], 80.00th=[ 816], 90.00th=[ 848], 95.00th=[ 873], 00:16:56.070 | 99.00th=[ 922], 99.50th=[ 938], 99.90th=[ 988], 99.95th=[ 988], 00:16:56.070 | 99.99th=[ 988] 00:16:56.070 bw ( KiB/s): min= 4096, max= 4096, per=51.55%, avg=4096.00, stdev= 0.00, samples=1 00:16:56.070 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:56.070 lat (usec) : 500=2.84%, 750=52.46%, 1000=41.86% 00:16:56.070 lat (msec) : 50=2.84% 00:16:56.070 cpu : usr=1.76%, sys=1.18%, ctx=529, majf=0, minf=1 00:16:56.070 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.070 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.070 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.070 job2: (groupid=0, jobs=1): err= 0: pid=1111616: Tue Jun 11 09:31:27 2024 00:16:56.070 read: IOPS=15, 
BW=63.4KiB/s (64.9kB/s)(64.0KiB/1010msec) 00:16:56.070 slat (nsec): min=25497, max=26474, avg=25956.63, stdev=283.69 00:16:56.070 clat (usec): min=40897, max=41036, avg=40965.12, stdev=40.49 00:16:56.070 lat (usec): min=40924, max=41063, avg=40991.08, stdev=40.52 00:16:56.070 clat percentiles (usec): 00:16:56.070 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:56.070 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:56.070 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:56.070 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:56.070 | 99.99th=[41157] 00:16:56.070 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:16:56.070 slat (usec): min=8, max=106, avg=27.50, stdev=11.19 00:16:56.070 clat (usec): min=228, max=1194, avg=656.88, stdev=148.62 00:16:56.070 lat (usec): min=239, max=1226, avg=684.38, stdev=152.42 00:16:56.070 clat percentiles (usec): 00:16:56.070 | 1.00th=[ 314], 5.00th=[ 420], 10.00th=[ 449], 20.00th=[ 515], 00:16:56.070 | 30.00th=[ 586], 40.00th=[ 627], 50.00th=[ 676], 60.00th=[ 709], 00:16:56.070 | 70.00th=[ 742], 80.00th=[ 783], 90.00th=[ 840], 95.00th=[ 889], 00:16:56.070 | 99.00th=[ 955], 99.50th=[ 988], 99.90th=[ 1188], 99.95th=[ 1188], 00:16:56.070 | 99.99th=[ 1188] 00:16:56.070 bw ( KiB/s): min= 4096, max= 4096, per=51.55%, avg=4096.00, stdev= 0.00, samples=1 00:16:56.070 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:56.070 lat (usec) : 250=0.19%, 500=17.61%, 750=53.22%, 1000=25.57% 00:16:56.070 lat (msec) : 2=0.38%, 50=3.03% 00:16:56.070 cpu : usr=0.79%, sys=1.49%, ctx=528, majf=0, minf=1 00:16:56.070 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.070 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.070 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.070 job3: (groupid=0, jobs=1): err= 0: pid=1111623: Tue Jun 11 09:31:27 2024 00:16:56.070 read: IOPS=17, BW=69.8KiB/s (71.5kB/s)(72.0KiB/1031msec) 00:16:56.071 slat (nsec): min=6987, max=25257, avg=22918.11, stdev=5372.37 00:16:56.071 clat (usec): min=857, max=42176, avg=39666.65, stdev=9686.40 00:16:56.071 lat (usec): min=867, max=42201, avg=39689.56, stdev=9689.76 00:16:56.071 clat percentiles (usec): 00:16:56.071 | 1.00th=[ 857], 5.00th=[ 857], 10.00th=[41681], 20.00th=[41681], 00:16:56.071 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:16:56.071 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:56.071 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:56.071 | 99.99th=[42206] 00:16:56.071 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:16:56.071 slat (nsec): min=9292, max=72476, avg=30298.00, stdev=7502.09 00:16:56.071 clat (usec): min=207, max=1052, avg=579.94, stdev=145.98 00:16:56.071 lat (usec): min=217, max=1084, avg=610.24, stdev=148.04 00:16:56.071 clat percentiles (usec): 00:16:56.071 | 1.00th=[ 231], 5.00th=[ 334], 10.00th=[ 371], 20.00th=[ 453], 00:16:56.071 | 30.00th=[ 506], 40.00th=[ 545], 50.00th=[ 594], 60.00th=[ 627], 00:16:56.071 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 766], 95.00th=[ 799], 00:16:56.071 | 99.00th=[ 898], 99.50th=[ 938], 99.90th=[ 1057], 99.95th=[ 1057], 00:16:56.071 | 99.99th=[ 
1057] 00:16:56.071 bw ( KiB/s): min= 4096, max= 4096, per=51.55%, avg=4096.00, stdev= 0.00, samples=1 00:16:56.071 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:56.071 lat (usec) : 250=1.51%, 500=25.85%, 750=57.36%, 1000=11.70% 00:16:56.071 lat (msec) : 2=0.38%, 50=3.21% 00:16:56.071 cpu : usr=0.78%, sys=1.46%, ctx=531, majf=0, minf=1 00:16:56.071 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.071 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.071 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.071 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.071 00:16:56.071 Run status group 0 (all jobs): 00:16:56.071 READ: bw=256KiB/s (262kB/s), 62.5KiB/s-69.8KiB/s (64.0kB/s-71.5kB/s), io=264KiB (270kB), run=1010-1031msec 00:16:56.071 WRITE: bw=7946KiB/s (8136kB/s), 1986KiB/s-2028KiB/s (2034kB/s-2076kB/s), io=8192KiB (8389kB), run=1010-1031msec 00:16:56.071 00:16:56.071 Disk stats (read/write): 00:16:56.071 nvme0n1: ios=67/512, merge=0/0, ticks=1035/313, in_queue=1348, util=99.80% 00:16:56.071 nvme0n2: ios=47/512, merge=0/0, ticks=471/316, in_queue=787, util=87.64% 00:16:56.071 nvme0n3: ios=11/512, merge=0/0, ticks=451/326, in_queue=777, util=88.37% 00:16:56.071 nvme0n4: ios=13/512, merge=0/0, ticks=505/275, in_queue=780, util=89.51% 00:16:56.071 09:31:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:56.071 [global] 00:16:56.071 thread=1 00:16:56.071 invalidate=1 00:16:56.071 rw=randwrite 00:16:56.071 time_based=1 00:16:56.071 runtime=1 00:16:56.071 ioengine=libaio 00:16:56.071 direct=1 00:16:56.071 bs=4096 00:16:56.071 iodepth=1 00:16:56.071 norandommap=0 00:16:56.071 numjobs=1 00:16:56.071 00:16:56.071 verify_dump=1 00:16:56.071 verify_backlog=512 00:16:56.071 verify_state_save=0 00:16:56.071 do_verify=1 00:16:56.071 verify=crc32c-intel 00:16:56.071 [job0] 00:16:56.071 filename=/dev/nvme0n1 00:16:56.071 [job1] 00:16:56.071 filename=/dev/nvme0n2 00:16:56.071 [job2] 00:16:56.071 filename=/dev/nvme0n3 00:16:56.071 [job3] 00:16:56.071 filename=/dev/nvme0n4 00:16:56.071 Could not set queue depth (nvme0n1) 00:16:56.071 Could not set queue depth (nvme0n2) 00:16:56.071 Could not set queue depth (nvme0n3) 00:16:56.071 Could not set queue depth (nvme0n4) 00:16:56.336 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.336 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.336 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.336 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.336 fio-3.35 00:16:56.336 Starting 4 threads 00:16:57.747 00:16:57.747 job0: (groupid=0, jobs=1): err= 0: pid=1112177: Tue Jun 11 09:31:29 2024 00:16:57.747 read: IOPS=17, BW=70.7KiB/s (72.4kB/s)(72.0KiB/1018msec) 00:16:57.747 slat (nsec): min=10286, max=25535, avg=24267.56, stdev=3497.20 00:16:57.747 clat (usec): min=40886, max=42071, avg=41385.56, stdev=479.40 00:16:57.747 lat (usec): min=40911, max=42096, avg=41409.83, stdev=479.13 00:16:57.747 clat percentiles (usec): 00:16:57.747 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 
20.00th=[41157], 00:16:57.747 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:16:57.747 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:16:57.747 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:57.747 | 99.99th=[42206] 00:16:57.747 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:16:57.747 slat (nsec): min=9072, max=68326, avg=21254.64, stdev=11350.62 00:16:57.747 clat (usec): min=194, max=775, avg=504.99, stdev=79.11 00:16:57.747 lat (usec): min=228, max=785, avg=526.24, stdev=81.10 00:16:57.747 clat percentiles (usec): 00:16:57.747 | 1.00th=[ 330], 5.00th=[ 375], 10.00th=[ 408], 20.00th=[ 441], 00:16:57.747 | 30.00th=[ 461], 40.00th=[ 482], 50.00th=[ 498], 60.00th=[ 529], 00:16:57.747 | 70.00th=[ 553], 80.00th=[ 570], 90.00th=[ 603], 95.00th=[ 627], 00:16:57.747 | 99.00th=[ 709], 99.50th=[ 758], 99.90th=[ 775], 99.95th=[ 775], 00:16:57.747 | 99.99th=[ 775] 00:16:57.747 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:16:57.747 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:57.747 lat (usec) : 250=0.19%, 500=49.06%, 750=46.79%, 1000=0.57% 00:16:57.747 lat (msec) : 50=3.40% 00:16:57.747 cpu : usr=0.39%, sys=1.28%, ctx=531, majf=0, minf=1 00:16:57.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:57.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.747 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:57.747 job1: (groupid=0, jobs=1): err= 0: pid=1112195: Tue Jun 11 09:31:29 2024 00:16:57.747 read: IOPS=14, BW=59.6KiB/s (61.0kB/s)(60.0KiB/1007msec) 00:16:57.747 slat (nsec): min=24984, max=25626, avg=25199.13, stdev=195.81 00:16:57.747 clat (usec): min=41762, max=42168, avg=41970.30, stdev=159.78 00:16:57.747 lat (usec): min=41787, max=42193, avg=41995.50, stdev=159.77 00:16:57.747 clat percentiles (usec): 00:16:57.747 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:16:57.747 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:57.747 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:57.747 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:57.747 | 99.99th=[42206] 00:16:57.747 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:16:57.747 slat (usec): min=9, max=4775, avg=39.43, stdev=209.83 00:16:57.747 clat (usec): min=294, max=1173, avg=686.17, stdev=134.22 00:16:57.747 lat (usec): min=306, max=5572, avg=725.60, stdev=254.14 00:16:57.747 clat percentiles (usec): 00:16:57.747 | 1.00th=[ 343], 5.00th=[ 482], 10.00th=[ 523], 20.00th=[ 578], 00:16:57.747 | 30.00th=[ 619], 40.00th=[ 652], 50.00th=[ 676], 60.00th=[ 717], 00:16:57.747 | 70.00th=[ 758], 80.00th=[ 799], 90.00th=[ 848], 95.00th=[ 906], 00:16:57.747 | 99.00th=[ 996], 99.50th=[ 1004], 99.90th=[ 1172], 99.95th=[ 1172], 00:16:57.747 | 99.99th=[ 1172] 00:16:57.747 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:16:57.747 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:57.747 lat (usec) : 500=7.21%, 750=59.20%, 1000=29.98% 00:16:57.747 lat (msec) : 2=0.76%, 50=2.85% 00:16:57.747 cpu : usr=1.29%, sys=1.09%, ctx=531, majf=0, minf=1 00:16:57.747 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:57.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.747 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:57.747 job2: (groupid=0, jobs=1): err= 0: pid=1112217: Tue Jun 11 09:31:29 2024 00:16:57.747 read: IOPS=17, BW=69.6KiB/s (71.3kB/s)(72.0KiB/1034msec) 00:16:57.747 slat (nsec): min=25086, max=26396, avg=25492.89, stdev=316.39 00:16:57.747 clat (usec): min=41910, max=42013, avg=41967.30, stdev=32.44 00:16:57.747 lat (usec): min=41936, max=42038, avg=41992.80, stdev=32.28 00:16:57.747 clat percentiles (usec): 00:16:57.747 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:16:57.747 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:57.747 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:57.747 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:57.747 | 99.99th=[42206] 00:16:57.747 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:16:57.747 slat (nsec): min=9452, max=53116, avg=29373.31, stdev=9067.50 00:16:57.747 clat (usec): min=154, max=869, avg=503.10, stdev=125.70 00:16:57.747 lat (usec): min=186, max=896, avg=532.48, stdev=127.79 00:16:57.747 clat percentiles (usec): 00:16:57.747 | 1.00th=[ 269], 5.00th=[ 302], 10.00th=[ 347], 20.00th=[ 408], 00:16:57.747 | 30.00th=[ 429], 40.00th=[ 449], 50.00th=[ 478], 60.00th=[ 529], 00:16:57.747 | 70.00th=[ 586], 80.00th=[ 627], 90.00th=[ 676], 95.00th=[ 709], 00:16:57.747 | 99.00th=[ 791], 99.50th=[ 848], 99.90th=[ 873], 99.95th=[ 873], 00:16:57.747 | 99.99th=[ 873] 00:16:57.747 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:16:57.747 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:57.747 lat (usec) : 250=0.57%, 500=52.64%, 750=41.13%, 1000=2.26% 00:16:57.747 lat (msec) : 50=3.40% 00:16:57.747 cpu : usr=0.68%, sys=1.55%, ctx=532, majf=0, minf=1 00:16:57.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:57.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.747 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:57.748 job3: (groupid=0, jobs=1): err= 0: pid=1112224: Tue Jun 11 09:31:29 2024 00:16:57.748 read: IOPS=16, BW=67.9KiB/s (69.5kB/s)(68.0KiB/1002msec) 00:16:57.748 slat (nsec): min=9813, max=25727, avg=24310.88, stdev=3742.19 00:16:57.748 clat (usec): min=1083, max=42253, avg=39544.35, stdev=9911.89 00:16:57.748 lat (usec): min=1093, max=42278, avg=39568.66, stdev=9915.63 00:16:57.748 clat percentiles (usec): 00:16:57.748 | 1.00th=[ 1090], 5.00th=[ 1090], 10.00th=[41681], 20.00th=[41681], 00:16:57.748 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:57.748 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:57.748 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:57.748 | 99.99th=[42206] 00:16:57.748 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:16:57.748 slat (nsec): min=9594, max=61459, avg=23521.99, stdev=11479.46 00:16:57.748 clat (usec): 
min=185, max=1103, avg=612.52, stdev=220.34 00:16:57.748 lat (usec): min=195, max=1119, avg=636.05, stdev=227.93 00:16:57.748 clat percentiles (usec): 00:16:57.748 | 1.00th=[ 237], 5.00th=[ 265], 10.00th=[ 293], 20.00th=[ 347], 00:16:57.748 | 30.00th=[ 457], 40.00th=[ 586], 50.00th=[ 660], 60.00th=[ 717], 00:16:57.748 | 70.00th=[ 766], 80.00th=[ 816], 90.00th=[ 873], 95.00th=[ 930], 00:16:57.748 | 99.00th=[ 1012], 99.50th=[ 1057], 99.90th=[ 1106], 99.95th=[ 1106], 00:16:57.748 | 99.99th=[ 1106] 00:16:57.748 bw ( KiB/s): min= 4096, max= 4096, per=51.70%, avg=4096.00, stdev= 0.00, samples=1 00:16:57.748 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:57.748 lat (usec) : 250=3.21%, 500=29.68%, 750=30.62%, 1000=31.95% 00:16:57.748 lat (msec) : 2=1.51%, 50=3.02% 00:16:57.748 cpu : usr=0.40%, sys=1.40%, ctx=532, majf=0, minf=1 00:16:57.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:57.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.748 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:57.748 00:16:57.748 Run status group 0 (all jobs): 00:16:57.748 READ: bw=263KiB/s (269kB/s), 59.6KiB/s-70.7KiB/s (61.0kB/s-72.4kB/s), io=272KiB (279kB), run=1002-1034msec 00:16:57.748 WRITE: bw=7923KiB/s (8113kB/s), 1981KiB/s-2044KiB/s (2028kB/s-2093kB/s), io=8192KiB (8389kB), run=1002-1034msec 00:16:57.748 00:16:57.748 Disk stats (read/write): 00:16:57.748 nvme0n1: ios=63/512, merge=0/0, ticks=652/253, in_queue=905, util=91.48% 00:16:57.748 nvme0n2: ios=57/512, merge=0/0, ticks=896/335, in_queue=1231, util=97.04% 00:16:57.748 nvme0n3: ios=51/512, merge=0/0, ticks=1432/239, in_queue=1671, util=98.51% 00:16:57.748 nvme0n4: ios=63/512, merge=0/0, ticks=635/302, in_queue=937, util=100.00% 00:16:57.748 09:31:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:57.748 [global] 00:16:57.748 thread=1 00:16:57.748 invalidate=1 00:16:57.748 rw=write 00:16:57.748 time_based=1 00:16:57.748 runtime=1 00:16:57.748 ioengine=libaio 00:16:57.748 direct=1 00:16:57.748 bs=4096 00:16:57.748 iodepth=128 00:16:57.748 norandommap=0 00:16:57.748 numjobs=1 00:16:57.748 00:16:57.748 verify_dump=1 00:16:57.748 verify_backlog=512 00:16:57.748 verify_state_save=0 00:16:57.748 do_verify=1 00:16:57.748 verify=crc32c-intel 00:16:57.748 [job0] 00:16:57.748 filename=/dev/nvme0n1 00:16:57.748 [job1] 00:16:57.748 filename=/dev/nvme0n2 00:16:57.748 [job2] 00:16:57.748 filename=/dev/nvme0n3 00:16:57.748 [job3] 00:16:57.748 filename=/dev/nvme0n4 00:16:57.748 Could not set queue depth (nvme0n1) 00:16:57.748 Could not set queue depth (nvme0n2) 00:16:57.748 Could not set queue depth (nvme0n3) 00:16:57.748 Could not set queue depth (nvme0n4) 00:16:58.016 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:58.016 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:58.016 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:58.016 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:58.016 fio-3.35 00:16:58.016 Starting 4 threads 00:16:59.433 
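For context on the devices these fio jobs exercise: before the first fio pass, target/fio.sh assembled the cnode1 subsystem over JSON-RPC, as traced earlier. Consolidated into one sequence (rpc.py standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used above):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                # issued seven times: Malloc0 .. Malloc6
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The four namespaces surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4, which is why each fio pass in this test runs four jobs, one per namespace.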
00:16:59.433 job0: (groupid=0, jobs=1): err= 0: pid=1112807: Tue Jun 11 09:31:30 2024 00:16:59.433 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:16:59.433 slat (nsec): min=1232, max=12173k, avg=99141.81, stdev=623498.26 00:16:59.433 clat (usec): min=6878, max=40314, avg=12297.24, stdev=4539.83 00:16:59.433 lat (usec): min=6881, max=40329, avg=12396.39, stdev=4592.07 00:16:59.433 clat percentiles (usec): 00:16:59.433 | 1.00th=[ 7635], 5.00th=[ 8291], 10.00th=[ 9241], 20.00th=[ 9503], 00:16:59.433 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[11731], 00:16:59.433 | 70.00th=[12649], 80.00th=[14746], 90.00th=[18220], 95.00th=[20055], 00:16:59.433 | 99.00th=[28443], 99.50th=[38011], 99.90th=[39584], 99.95th=[40109], 00:16:59.433 | 99.99th=[40109] 00:16:59.433 write: IOPS=4990, BW=19.5MiB/s (20.4MB/s)(19.6MiB/1005msec); 0 zone resets 00:16:59.433 slat (usec): min=2, max=16300, avg=103.61, stdev=554.36 00:16:59.433 clat (usec): min=4037, max=48375, avg=14062.28, stdev=7884.79 00:16:59.433 lat (usec): min=4793, max=48386, avg=14165.90, stdev=7935.16 00:16:59.433 clat percentiles (usec): 00:16:59.433 | 1.00th=[ 6521], 5.00th=[ 7701], 10.00th=[ 8586], 20.00th=[ 9241], 00:16:59.433 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[11338], 00:16:59.433 | 70.00th=[14353], 80.00th=[19268], 90.00th=[26084], 95.00th=[33424], 00:16:59.433 | 99.00th=[40109], 99.50th=[40109], 99.90th=[43254], 99.95th=[43254], 00:16:59.433 | 99.99th=[48497] 00:16:59.433 bw ( KiB/s): min=16848, max=22256, per=23.20%, avg=19552.00, stdev=3824.03, samples=2 00:16:59.433 iops : min= 4212, max= 5564, avg=4888.00, stdev=956.01, samples=2 00:16:59.433 lat (msec) : 10=45.84%, 20=42.62%, 50=11.55% 00:16:59.433 cpu : usr=3.49%, sys=4.28%, ctx=649, majf=0, minf=1 00:16:59.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:59.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:59.433 issued rwts: total=4608,5015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:59.433 job1: (groupid=0, jobs=1): err= 0: pid=1112826: Tue Jun 11 09:31:30 2024 00:16:59.433 read: IOPS=6596, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1009msec) 00:16:59.433 slat (nsec): min=1278, max=8977.4k, avg=69213.62, stdev=503145.99 00:16:59.433 clat (usec): min=2647, max=29928, avg=9539.49, stdev=3810.28 00:16:59.433 lat (usec): min=2652, max=29951, avg=9608.71, stdev=3840.60 00:16:59.433 clat percentiles (usec): 00:16:59.433 | 1.00th=[ 4752], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6521], 00:16:59.433 | 30.00th=[ 7111], 40.00th=[ 7898], 50.00th=[ 8356], 60.00th=[ 9503], 00:16:59.433 | 70.00th=[10290], 80.00th=[12125], 90.00th=[13960], 95.00th=[18744], 00:16:59.433 | 99.00th=[22152], 99.50th=[23462], 99.90th=[24511], 99.95th=[24511], 00:16:59.433 | 99.99th=[30016] 00:16:59.433 write: IOPS=6789, BW=26.5MiB/s (27.8MB/s)(26.8MiB/1009msec); 0 zone resets 00:16:59.433 slat (usec): min=2, max=13649, avg=69.19, stdev=445.07 00:16:59.433 clat (usec): min=1189, max=35614, avg=9414.23, stdev=6021.73 00:16:59.433 lat (usec): min=1199, max=35619, avg=9483.42, stdev=6063.31 00:16:59.433 clat percentiles (usec): 00:16:59.433 | 1.00th=[ 2442], 5.00th=[ 3720], 10.00th=[ 4047], 20.00th=[ 5276], 00:16:59.433 | 30.00th=[ 6325], 40.00th=[ 6980], 50.00th=[ 7439], 60.00th=[ 8356], 00:16:59.433 | 70.00th=[10159], 80.00th=[11600], 90.00th=[16712], 
95.00th=[24249], 00:16:59.433 | 99.00th=[32637], 99.50th=[33162], 99.90th=[35390], 99.95th=[35390], 00:16:59.433 | 99.99th=[35390] 00:16:59.433 bw ( KiB/s): min=22072, max=31720, per=31.91%, avg=26896.00, stdev=6822.17, samples=2 00:16:59.433 iops : min= 5518, max= 7930, avg=6724.00, stdev=1705.54, samples=2 00:16:59.433 lat (msec) : 2=0.07%, 4=4.70%, 10=63.12%, 20=26.22%, 50=5.89% 00:16:59.433 cpu : usr=4.56%, sys=7.44%, ctx=539, majf=0, minf=1 00:16:59.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:16:59.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:59.433 issued rwts: total=6656,6851,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:59.433 job2: (groupid=0, jobs=1): err= 0: pid=1112846: Tue Jun 11 09:31:30 2024 00:16:59.433 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:16:59.433 slat (nsec): min=1356, max=10688k, avg=100672.08, stdev=616508.74 00:16:59.433 clat (usec): min=7089, max=31964, avg=12465.09, stdev=3495.85 00:16:59.433 lat (usec): min=7096, max=32000, avg=12565.77, stdev=3546.07 00:16:59.433 clat percentiles (usec): 00:16:59.433 | 1.00th=[ 7767], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10683], 00:16:59.433 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11863], 00:16:59.433 | 70.00th=[12780], 80.00th=[13566], 90.00th=[16581], 95.00th=[21365], 00:16:59.433 | 99.00th=[25035], 99.50th=[28705], 99.90th=[28705], 99.95th=[29754], 00:16:59.433 | 99.99th=[31851] 00:16:59.433 write: IOPS=4817, BW=18.8MiB/s (19.7MB/s)(18.9MiB/1004msec); 0 zone resets 00:16:59.433 slat (usec): min=2, max=52782, avg=105.83, stdev=916.68 00:16:59.433 clat (usec): min=667, max=71948, avg=12862.54, stdev=5028.93 00:16:59.433 lat (usec): min=5158, max=71978, avg=12968.37, stdev=5127.84 00:16:59.433 clat percentiles (usec): 00:16:59.433 | 1.00th=[ 5735], 5.00th=[ 8979], 10.00th=[10290], 20.00th=[10814], 00:16:59.433 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11731], 00:16:59.433 | 70.00th=[12387], 80.00th=[13698], 90.00th=[19268], 95.00th=[22676], 00:16:59.433 | 99.00th=[25035], 99.50th=[25035], 99.90th=[71828], 99.95th=[71828], 00:16:59.433 | 99.99th=[71828] 00:16:59.433 bw ( KiB/s): min=16384, max=21288, per=22.35%, avg=18836.00, stdev=3467.65, samples=2 00:16:59.433 iops : min= 4096, max= 5322, avg=4709.00, stdev=866.91, samples=2 00:16:59.433 lat (usec) : 750=0.01% 00:16:59.433 lat (msec) : 10=10.68%, 20=81.51%, 50=7.63%, 100=0.16% 00:16:59.433 cpu : usr=3.19%, sys=5.58%, ctx=554, majf=0, minf=1 00:16:59.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:59.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:59.433 issued rwts: total=4608,4837,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:59.433 job3: (groupid=0, jobs=1): err= 0: pid=1112853: Tue Jun 11 09:31:30 2024 00:16:59.433 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:16:59.433 slat (nsec): min=1321, max=10877k, avg=100644.22, stdev=639096.10 00:16:59.433 clat (usec): min=2744, max=42714, avg=13022.23, stdev=7857.73 00:16:59.433 lat (usec): min=2750, max=42722, avg=13122.87, stdev=7920.59 00:16:59.433 clat percentiles (usec): 00:16:59.433 | 1.00th=[ 4621], 
5.00th=[ 6390], 10.00th=[ 7242], 20.00th=[ 7898], 00:16:59.433 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[10159], 00:16:59.434 | 70.00th=[11600], 80.00th=[21365], 90.00th=[26346], 95.00th=[29754], 00:16:59.434 | 99.00th=[37487], 99.50th=[37487], 99.90th=[42730], 99.95th=[42730], 00:16:59.434 | 99.99th=[42730] 00:16:59.434 write: IOPS=4531, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1006msec); 0 zone resets 00:16:59.434 slat (nsec): min=1990, max=6786.2k, avg=119096.82, stdev=566612.67 00:16:59.434 clat (usec): min=712, max=60121, avg=16312.17, stdev=13885.22 00:16:59.434 lat (usec): min=721, max=60129, avg=16431.26, stdev=13978.75 00:16:59.434 clat percentiles (usec): 00:16:59.434 | 1.00th=[ 2278], 5.00th=[ 5866], 10.00th=[ 6849], 20.00th=[ 7701], 00:16:59.434 | 30.00th=[ 7898], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[12387], 00:16:59.434 | 70.00th=[18220], 80.00th=[22938], 90.00th=[38536], 95.00th=[53740], 00:16:59.434 | 99.00th=[58983], 99.50th=[59507], 99.90th=[60031], 99.95th=[60031], 00:16:59.434 | 99.99th=[60031] 00:16:59.434 bw ( KiB/s): min=17320, max=18136, per=21.03%, avg=17728.00, stdev=577.00, samples=2 00:16:59.434 iops : min= 4330, max= 4534, avg=4432.00, stdev=144.25, samples=2 00:16:59.434 lat (usec) : 750=0.03% 00:16:59.434 lat (msec) : 2=0.39%, 4=0.62%, 10=55.32%, 20=19.26%, 50=20.73% 00:16:59.434 lat (msec) : 100=3.64% 00:16:59.434 cpu : usr=3.38%, sys=4.37%, ctx=504, majf=0, minf=1 00:16:59.434 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:59.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:59.434 issued rwts: total=4096,4559,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.434 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:59.434 00:16:59.434 Run status group 0 (all jobs): 00:16:59.434 READ: bw=77.3MiB/s (81.1MB/s), 15.9MiB/s-25.8MiB/s (16.7MB/s-27.0MB/s), io=78.0MiB (81.8MB), run=1004-1009msec 00:16:59.434 WRITE: bw=82.3MiB/s (86.3MB/s), 17.7MiB/s-26.5MiB/s (18.6MB/s-27.8MB/s), io=83.1MiB (87.1MB), run=1004-1009msec 00:16:59.434 00:16:59.434 Disk stats (read/write): 00:16:59.434 nvme0n1: ios=3634/3852, merge=0/0, ticks=23497/27952, in_queue=51449, util=86.97% 00:16:59.434 nvme0n2: ios=5671/5947, merge=0/0, ticks=43940/41498, in_queue=85438, util=96.74% 00:16:59.434 nvme0n3: ios=3611/3951, merge=0/0, ticks=23391/23841, in_queue=47232, util=100.00% 00:16:59.434 nvme0n4: ios=3584/3919, merge=0/0, ticks=24120/30263, in_queue=54383, util=88.91% 00:16:59.434 09:31:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:59.434 [global] 00:16:59.434 thread=1 00:16:59.434 invalidate=1 00:16:59.434 rw=randwrite 00:16:59.434 time_based=1 00:16:59.434 runtime=1 00:16:59.434 ioengine=libaio 00:16:59.434 direct=1 00:16:59.434 bs=4096 00:16:59.434 iodepth=128 00:16:59.434 norandommap=0 00:16:59.434 numjobs=1 00:16:59.434 00:16:59.434 verify_dump=1 00:16:59.434 verify_backlog=512 00:16:59.434 verify_state_save=0 00:16:59.434 do_verify=1 00:16:59.434 verify=crc32c-intel 00:16:59.434 [job0] 00:16:59.434 filename=/dev/nvme0n1 00:16:59.434 [job1] 00:16:59.434 filename=/dev/nvme0n2 00:16:59.434 [job2] 00:16:59.434 filename=/dev/nvme0n3 00:16:59.434 [job3] 00:16:59.434 filename=/dev/nvme0n4 00:16:59.434 Could not set queue depth (nvme0n1) 00:16:59.434 Could not set queue depth (nvme0n2) 00:16:59.434 
Could not set queue depth (nvme0n3) 00:16:59.434 Could not set queue depth (nvme0n4) 00:16:59.699 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:59.699 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:59.699 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:59.699 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:59.699 fio-3.35 00:16:59.699 Starting 4 threads 00:17:01.105 00:17:01.105 job0: (groupid=0, jobs=1): err= 0: pid=1113408: Tue Jun 11 09:31:32 2024 00:17:01.105 read: IOPS=3557, BW=13.9MiB/s (14.6MB/s)(14.1MiB/1016msec) 00:17:01.105 slat (nsec): min=1368, max=19594k, avg=99791.05, stdev=802368.24 00:17:01.105 clat (usec): min=6365, max=39750, avg=13042.49, stdev=4790.74 00:17:01.105 lat (usec): min=6374, max=51007, avg=13142.29, stdev=4869.81 00:17:01.105 clat percentiles (usec): 00:17:01.105 | 1.00th=[ 8094], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9896], 00:17:01.105 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11338], 60.00th=[12256], 00:17:01.105 | 70.00th=[13173], 80.00th=[15664], 90.00th=[20841], 95.00th=[22676], 00:17:01.105 | 99.00th=[33817], 99.50th=[35390], 99.90th=[35390], 99.95th=[37487], 00:17:01.105 | 99.99th=[39584] 00:17:01.105 write: IOPS=4031, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1016msec); 0 zone resets 00:17:01.105 slat (usec): min=2, max=30409, avg=151.94, stdev=1040.59 00:17:01.105 clat (msec): min=2, max=104, avg=19.90, stdev=21.21 00:17:01.105 lat (msec): min=2, max=104, avg=20.05, stdev=21.34 00:17:01.105 clat percentiles (msec): 00:17:01.105 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 9], 00:17:01.105 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 13], 00:17:01.105 | 70.00th=[ 16], 80.00th=[ 25], 90.00th=[ 56], 95.00th=[ 78], 00:17:01.105 | 99.00th=[ 90], 99.50th=[ 95], 99.90th=[ 105], 99.95th=[ 105], 00:17:01.105 | 99.99th=[ 105] 00:17:01.105 bw ( KiB/s): min=15000, max=16984, per=17.63%, avg=15992.00, stdev=1402.90, samples=2 00:17:01.105 iops : min= 3750, max= 4246, avg=3998.00, stdev=350.72, samples=2 00:17:01.105 lat (msec) : 4=0.83%, 10=28.65%, 20=53.48%, 50=11.18%, 100=5.78% 00:17:01.105 lat (msec) : 250=0.08% 00:17:01.105 cpu : usr=2.66%, sys=4.93%, ctx=313, majf=0, minf=1 00:17:01.105 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:01.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:01.105 issued rwts: total=3614,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.105 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:01.105 job1: (groupid=0, jobs=1): err= 0: pid=1113410: Tue Jun 11 09:31:32 2024 00:17:01.105 read: IOPS=9078, BW=35.5MiB/s (37.2MB/s)(35.6MiB/1004msec) 00:17:01.105 slat (nsec): min=1333, max=9626.4k, avg=56037.45, stdev=403639.55 00:17:01.105 clat (usec): min=2106, max=23443, avg=7599.97, stdev=2293.92 00:17:01.105 lat (usec): min=2670, max=23457, avg=7656.01, stdev=2313.84 00:17:01.105 clat percentiles (usec): 00:17:01.105 | 1.00th=[ 3720], 5.00th=[ 5080], 10.00th=[ 5342], 20.00th=[ 5932], 00:17:01.105 | 30.00th=[ 6259], 40.00th=[ 6652], 50.00th=[ 7177], 60.00th=[ 7570], 00:17:01.105 | 70.00th=[ 8160], 80.00th=[ 8848], 90.00th=[10552], 95.00th=[11863], 00:17:01.105 | 99.00th=[15533], 
99.50th=[17171], 99.90th=[18482], 99.95th=[18482], 00:17:01.105 | 99.99th=[23462] 00:17:01.105 write: IOPS=9179, BW=35.9MiB/s (37.6MB/s)(36.0MiB/1004msec); 0 zone resets 00:17:01.105 slat (usec): min=2, max=7938, avg=48.54, stdev=331.99 00:17:01.105 clat (usec): min=1165, max=16126, avg=6297.59, stdev=1910.69 00:17:01.105 lat (usec): min=1175, max=16130, avg=6346.14, stdev=1914.45 00:17:01.105 clat percentiles (usec): 00:17:01.105 | 1.00th=[ 2573], 5.00th=[ 3523], 10.00th=[ 4228], 20.00th=[ 5014], 00:17:01.105 | 30.00th=[ 5342], 40.00th=[ 5735], 50.00th=[ 6194], 60.00th=[ 6456], 00:17:01.105 | 70.00th=[ 6718], 80.00th=[ 7242], 90.00th=[ 8586], 95.00th=[10028], 00:17:01.105 | 99.00th=[13435], 99.50th=[14353], 99.90th=[15270], 99.95th=[15270], 00:17:01.105 | 99.99th=[16188] 00:17:01.105 bw ( KiB/s): min=36864, max=36864, per=40.64%, avg=36864.00, stdev= 0.00, samples=2 00:17:01.105 iops : min= 9216, max= 9216, avg=9216.00, stdev= 0.00, samples=2 00:17:01.105 lat (msec) : 2=0.10%, 4=4.22%, 10=86.43%, 20=9.23%, 50=0.02% 00:17:01.105 cpu : usr=7.28%, sys=6.78%, ctx=655, majf=0, minf=1 00:17:01.105 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:17:01.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:01.105 issued rwts: total=9115,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.105 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:01.105 job2: (groupid=0, jobs=1): err= 0: pid=1113432: Tue Jun 11 09:31:32 2024 00:17:01.105 read: IOPS=4297, BW=16.8MiB/s (17.6MB/s)(17.0MiB/1010msec) 00:17:01.105 slat (nsec): min=1394, max=13816k, avg=117656.47, stdev=870800.14 00:17:01.106 clat (usec): min=4807, max=27686, avg=14880.84, stdev=3387.94 00:17:01.106 lat (usec): min=4812, max=30963, avg=14998.49, stdev=3454.99 00:17:01.106 clat percentiles (usec): 00:17:01.106 | 1.00th=[ 7963], 5.00th=[11863], 10.00th=[12387], 20.00th=[12780], 00:17:01.106 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13829], 60.00th=[14484], 00:17:01.106 | 70.00th=[15533], 80.00th=[17433], 90.00th=[19792], 95.00th=[21890], 00:17:01.106 | 99.00th=[24773], 99.50th=[25560], 99.90th=[26346], 99.95th=[26346], 00:17:01.106 | 99.99th=[27657] 00:17:01.106 write: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec); 0 zone resets 00:17:01.106 slat (usec): min=2, max=18931, avg=101.59, stdev=722.57 00:17:01.106 clat (usec): min=1158, max=34041, avg=13756.82, stdev=5092.12 00:17:01.106 lat (usec): min=1167, max=34076, avg=13858.41, stdev=5136.69 00:17:01.106 clat percentiles (usec): 00:17:01.106 | 1.00th=[ 3982], 5.00th=[ 7570], 10.00th=[ 8291], 20.00th=[ 9896], 00:17:01.106 | 30.00th=[11863], 40.00th=[12780], 50.00th=[13304], 60.00th=[13566], 00:17:01.106 | 70.00th=[14222], 80.00th=[16319], 90.00th=[20317], 95.00th=[26608], 00:17:01.106 | 99.00th=[28705], 99.50th=[30016], 99.90th=[33817], 99.95th=[33817], 00:17:01.106 | 99.99th=[33817] 00:17:01.106 bw ( KiB/s): min=18264, max=18600, per=20.32%, avg=18432.00, stdev=237.59, samples=2 00:17:01.106 iops : min= 4566, max= 4650, avg=4608.00, stdev=59.40, samples=2 00:17:01.106 lat (msec) : 2=0.02%, 4=0.53%, 10=11.56%, 20=77.86%, 50=10.04% 00:17:01.106 cpu : usr=3.87%, sys=4.76%, ctx=381, majf=0, minf=1 00:17:01.106 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:01.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:17:01.106 issued rwts: total=4340,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.106 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:01.106 job3: (groupid=0, jobs=1): err= 0: pid=1113439: Tue Jun 11 09:31:32 2024 00:17:01.106 read: IOPS=4937, BW=19.3MiB/s (20.2MB/s)(19.5MiB/1010msec) 00:17:01.106 slat (nsec): min=1315, max=24872k, avg=91554.05, stdev=897302.12 00:17:01.106 clat (usec): min=2167, max=57438, avg=14094.73, stdev=6218.59 00:17:01.106 lat (usec): min=2192, max=57464, avg=14186.29, stdev=6281.74 00:17:01.106 clat percentiles (usec): 00:17:01.106 | 1.00th=[ 2507], 5.00th=[ 7635], 10.00th=[ 8455], 20.00th=[10552], 00:17:01.106 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12649], 60.00th=[13173], 00:17:01.106 | 70.00th=[13829], 80.00th=[16909], 90.00th=[20579], 95.00th=[27132], 00:17:01.106 | 99.00th=[41681], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:17:01.106 | 99.99th=[57410] 00:17:01.106 write: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec); 0 zone resets 00:17:01.106 slat (usec): min=2, max=16855, avg=73.52, stdev=662.72 00:17:01.106 clat (usec): min=1008, max=30934, avg=11323.43, stdev=4298.59 00:17:01.106 lat (usec): min=1041, max=30937, avg=11396.95, stdev=4333.05 00:17:01.106 clat percentiles (usec): 00:17:01.106 | 1.00th=[ 2835], 5.00th=[ 5800], 10.00th=[ 6652], 20.00th=[ 7767], 00:17:01.106 | 30.00th=[ 8848], 40.00th=[10028], 50.00th=[10945], 60.00th=[11731], 00:17:01.106 | 70.00th=[12518], 80.00th=[14746], 90.00th=[16188], 95.00th=[18220], 00:17:01.106 | 99.00th=[26870], 99.50th=[27657], 99.90th=[28443], 99.95th=[28443], 00:17:01.106 | 99.99th=[31065] 00:17:01.106 bw ( KiB/s): min=20472, max=20488, per=22.58%, avg=20480.00, stdev=11.31, samples=2 00:17:01.106 iops : min= 5118, max= 5122, avg=5120.00, stdev= 2.83, samples=2 00:17:01.106 lat (msec) : 2=0.10%, 4=1.60%, 10=25.39%, 20=66.13%, 50=6.77% 00:17:01.106 lat (msec) : 100=0.01% 00:17:01.106 cpu : usr=4.16%, sys=4.96%, ctx=304, majf=0, minf=2 00:17:01.106 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:01.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:01.106 issued rwts: total=4987,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.106 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:01.106 00:17:01.106 Run status group 0 (all jobs): 00:17:01.106 READ: bw=84.8MiB/s (88.9MB/s), 13.9MiB/s-35.5MiB/s (14.6MB/s-37.2MB/s), io=86.2MiB (90.3MB), run=1004-1016msec 00:17:01.106 WRITE: bw=88.6MiB/s (92.9MB/s), 15.7MiB/s-35.9MiB/s (16.5MB/s-37.6MB/s), io=90.0MiB (94.4MB), run=1004-1016msec 00:17:01.106 00:17:01.106 Disk stats (read/write): 00:17:01.106 nvme0n1: ios=3628/3759, merge=0/0, ticks=45395/51286, in_queue=96681, util=99.80% 00:17:01.106 nvme0n2: ios=7256/7680, merge=0/0, ticks=54441/47088, in_queue=101529, util=91.73% 00:17:01.106 nvme0n3: ios=3619/3677, merge=0/0, ticks=52855/48583, in_queue=101438, util=100.00% 00:17:01.106 nvme0n4: ios=4098/4175, merge=0/0, ticks=56381/46375, in_queue=102756, util=91.68% 00:17:01.106 09:31:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:01.106 09:31:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1113812 00:17:01.106 09:31:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:01.106 09:31:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 
4096 -d 1 -t read -r 10 00:17:01.106 [global] 00:17:01.106 thread=1 00:17:01.106 invalidate=1 00:17:01.106 rw=read 00:17:01.106 time_based=1 00:17:01.106 runtime=10 00:17:01.106 ioengine=libaio 00:17:01.106 direct=1 00:17:01.106 bs=4096 00:17:01.106 iodepth=1 00:17:01.106 norandommap=1 00:17:01.106 numjobs=1 00:17:01.106 00:17:01.106 [job0] 00:17:01.106 filename=/dev/nvme0n1 00:17:01.106 [job1] 00:17:01.106 filename=/dev/nvme0n2 00:17:01.106 [job2] 00:17:01.106 filename=/dev/nvme0n3 00:17:01.106 [job3] 00:17:01.106 filename=/dev/nvme0n4 00:17:01.106 Could not set queue depth (nvme0n1) 00:17:01.106 Could not set queue depth (nvme0n2) 00:17:01.106 Could not set queue depth (nvme0n3) 00:17:01.106 Could not set queue depth (nvme0n4) 00:17:01.367 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:01.367 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:01.367 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:01.367 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:01.367 fio-3.35 00:17:01.367 Starting 4 threads 00:17:03.902 09:31:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:04.161 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=8269824, buflen=4096 00:17:04.161 fio: pid=1114137, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:04.161 09:31:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:04.420 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=311296, buflen=4096 00:17:04.420 fio: pid=1114111, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:04.420 09:31:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:04.420 09:31:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:04.420 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=7163904, buflen=4096 00:17:04.420 fio: pid=1114030, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:04.420 09:31:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:04.420 09:31:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:04.680 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=10944512, buflen=4096 00:17:04.680 fio: pid=1114056, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:04.680 09:31:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:04.680 09:31:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:04.680 00:17:04.680 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1114030: Tue Jun 11 09:31:36 2024 00:17:04.680 read: IOPS=590, BW=2361KiB/s (2418kB/s)(6996KiB/2963msec) 00:17:04.680 slat 
(usec): min=6, max=11135, avg=35.56, stdev=339.59 00:17:04.680 clat (usec): min=411, max=42048, avg=1652.73, stdev=5782.12 00:17:04.680 lat (usec): min=436, max=46088, avg=1688.30, stdev=5809.13 00:17:04.680 clat percentiles (usec): 00:17:04.680 | 1.00th=[ 523], 5.00th=[ 652], 10.00th=[ 701], 20.00th=[ 750], 00:17:04.680 | 30.00th=[ 783], 40.00th=[ 807], 50.00th=[ 832], 60.00th=[ 848], 00:17:04.680 | 70.00th=[ 873], 80.00th=[ 889], 90.00th=[ 922], 95.00th=[ 955], 00:17:04.680 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:04.680 | 99.99th=[42206] 00:17:04.680 bw ( KiB/s): min= 96, max= 4912, per=30.49%, avg=2507.20, stdev=2164.47, samples=5 00:17:04.680 iops : min= 24, max= 1228, avg=626.80, stdev=541.12, samples=5 00:17:04.680 lat (usec) : 500=0.63%, 750=19.54%, 1000=76.86% 00:17:04.680 lat (msec) : 2=0.86%, 50=2.06% 00:17:04.680 cpu : usr=0.44%, sys=1.69%, ctx=1753, majf=0, minf=1 00:17:04.680 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:04.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.680 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.680 issued rwts: total=1750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:04.680 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:04.680 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1114056: Tue Jun 11 09:31:36 2024 00:17:04.680 read: IOPS=843, BW=3372KiB/s (3453kB/s)(10.4MiB/3170msec) 00:17:04.680 slat (usec): min=6, max=21427, avg=59.90, stdev=745.69 00:17:04.680 clat (usec): min=699, max=41550, avg=1119.58, stdev=786.45 00:17:04.680 lat (usec): min=725, max=41577, avg=1179.49, stdev=1083.86 00:17:04.680 clat percentiles (usec): 00:17:04.680 | 1.00th=[ 898], 5.00th=[ 979], 10.00th=[ 1012], 20.00th=[ 1057], 00:17:04.680 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:17:04.680 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1205], 00:17:04.680 | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[ 1336], 99.95th=[ 3064], 00:17:04.680 | 99.99th=[41681] 00:17:04.680 bw ( KiB/s): min= 3217, max= 3536, per=42.03%, avg=3456.20, stdev=134.49, samples=5 00:17:04.680 iops : min= 804, max= 884, avg=864.00, stdev=33.73, samples=5 00:17:04.680 lat (usec) : 750=0.07%, 1000=7.67% 00:17:04.680 lat (msec) : 2=92.14%, 4=0.04%, 50=0.04% 00:17:04.680 cpu : usr=1.45%, sys=3.41%, ctx=2681, majf=0, minf=1 00:17:04.680 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:04.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.680 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.680 issued rwts: total=2673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:04.680 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:04.680 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1114111: Tue Jun 11 09:31:36 2024 00:17:04.680 read: IOPS=27, BW=110KiB/s (113kB/s)(304KiB/2760msec) 00:17:04.680 slat (usec): min=8, max=18524, avg=265.21, stdev=2108.19 00:17:04.680 clat (usec): min=799, max=42098, avg=36020.05, stdev=14469.89 00:17:04.680 lat (usec): min=836, max=59825, avg=36288.41, stdev=14712.43 00:17:04.680 clat percentiles (usec): 00:17:04.680 | 1.00th=[ 799], 5.00th=[ 1057], 10.00th=[ 1139], 20.00th=[41681], 00:17:04.680 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:04.680 | 
70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:04.680 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:04.680 | 99.99th=[42206] 00:17:04.680 bw ( KiB/s): min= 96, max= 168, per=1.36%, avg=112.00, stdev=31.50, samples=5 00:17:04.680 iops : min= 24, max= 42, avg=28.00, stdev= 7.87, samples=5 00:17:04.680 lat (usec) : 1000=2.60% 00:17:04.680 lat (msec) : 2=11.69%, 50=84.42% 00:17:04.680 cpu : usr=0.14%, sys=0.00%, ctx=78, majf=0, minf=1 00:17:04.680 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:04.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.680 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.680 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:04.680 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:04.680 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1114137: Tue Jun 11 09:31:36 2024 00:17:04.680 read: IOPS=801, BW=3206KiB/s (3283kB/s)(8076KiB/2519msec) 00:17:04.680 slat (nsec): min=7509, max=61284, avg=26122.82, stdev=3440.70 00:17:04.680 clat (usec): min=409, max=3805, avg=1214.62, stdev=167.72 00:17:04.680 lat (usec): min=435, max=3836, avg=1240.75, stdev=167.69 00:17:04.680 clat percentiles (usec): 00:17:04.680 | 1.00th=[ 660], 5.00th=[ 930], 10.00th=[ 1037], 20.00th=[ 1106], 00:17:04.680 | 30.00th=[ 1139], 40.00th=[ 1188], 50.00th=[ 1254], 60.00th=[ 1287], 00:17:04.680 | 70.00th=[ 1319], 80.00th=[ 1336], 90.00th=[ 1369], 95.00th=[ 1401], 00:17:04.680 | 99.00th=[ 1450], 99.50th=[ 1467], 99.90th=[ 1500], 99.95th=[ 1500], 00:17:04.680 | 99.99th=[ 3818] 00:17:04.680 bw ( KiB/s): min= 3161, max= 3328, per=39.05%, avg=3211.40, stdev=66.89, samples=5 00:17:04.680 iops : min= 790, max= 832, avg=802.80, stdev=16.77, samples=5 00:17:04.680 lat (usec) : 500=0.20%, 750=1.88%, 1000=5.79% 00:17:04.680 lat (msec) : 2=92.03%, 4=0.05% 00:17:04.680 cpu : usr=1.23%, sys=3.34%, ctx=2021, majf=0, minf=2 00:17:04.680 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:04.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.680 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.680 issued rwts: total=2020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:04.680 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:04.680 00:17:04.680 Run status group 0 (all jobs): 00:17:04.681 READ: bw=8222KiB/s (8419kB/s), 110KiB/s-3372KiB/s (113kB/s-3453kB/s), io=25.5MiB (26.7MB), run=2519-3170msec 00:17:04.681 00:17:04.681 Disk stats (read/write): 00:17:04.681 nvme0n1: ios=1693/0, merge=0/0, ticks=2611/0, in_queue=2611, util=90.88% 00:17:04.681 nvme0n2: ios=2525/0, merge=0/0, ticks=2567/0, in_queue=2567, util=91.07% 00:17:04.681 nvme0n3: ios=75/0, merge=0/0, ticks=2696/0, in_queue=2696, util=94.98% 00:17:04.681 nvme0n4: ios=1996/0, merge=0/0, ticks=2190/0, in_queue=2190, util=96.29% 00:17:04.962 09:31:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:04.962 09:31:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:05.228 09:31:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:05.228 09:31:36 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:05.487 09:31:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:05.487 09:31:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:05.487 09:31:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:05.487 09:31:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:05.747 09:31:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:05.747 09:31:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1113812 00:17:05.747 09:31:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:05.747 09:31:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:06.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.007 09:31:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:06.007 09:31:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:17:06.007 09:31:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:17:06.007 09:31:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:06.007 09:31:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:17:06.007 09:31:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:06.007 09:31:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:17:06.007 09:31:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:06.007 09:31:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:06.007 nvmf hotplug test: fio failed as expected 00:17:06.007 09:31:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:06.267 rmmod nvme_tcp 00:17:06.267 rmmod nvme_fabrics 00:17:06.267 rmmod nvme_keyring 00:17:06.267 09:31:37 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1109265 ']' 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1109265 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 1109265 ']' 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 1109265 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1109265 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1109265' 00:17:06.267 killing process with pid 1109265 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 1109265 00:17:06.267 09:31:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 1109265 00:17:06.528 09:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:06.528 09:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:06.528 09:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:06.528 09:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.528 09:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:06.528 09:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.528 09:31:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.528 09:31:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.443 09:31:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:08.443 00:17:08.443 real 0m29.645s 00:17:08.443 user 2m43.095s 00:17:08.443 sys 0m9.280s 00:17:08.443 09:31:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:08.443 09:31:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.443 ************************************ 00:17:08.443 END TEST nvmf_fio_target 00:17:08.443 ************************************ 00:17:08.443 09:31:40 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:08.443 09:31:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:08.443 09:31:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:08.443 09:31:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:08.704 ************************************ 00:17:08.704 START TEST nvmf_bdevio 00:17:08.704 ************************************ 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:08.704 * Looking for test storage... 00:17:08.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:08.704 09:31:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:16.850 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:16.850 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:16.850 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.850 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.850 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.850 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:16.851 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:16.851 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:16.851 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:16.851 
Found net devices under 0000:4b:00.1: cvl_0_1 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:16.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:17:16.851 00:17:16.851 --- 10.0.0.2 ping statistics --- 00:17:16.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.851 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
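The namespace plumbing traced here is what lets a single host act as both NVMe-oF target and initiator over real E810 ports: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, and an iptables rule admits TCP port 4420. The two pings (one above, one just below) then confirm reachability in each direction. Condensed into a standalone sketch, assuming root and the renamed E810 netdevs (cvl_0_0, cvl_0_1) from the PCI scan above:

    ip netns add cvl_0_0_ns_spdk                   # target-side network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

Every nvmf_tgt launch that follows is prefixed with "ip netns exec cvl_0_0_ns_spdk", so the target listens inside the namespace while fio, ping and nvme-cli run outside it.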
00:17:16.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.393 ms 00:17:16.851 00:17:16.851 --- 10.0.0.1 ping statistics --- 00:17:16.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.851 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1121004 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1121004 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 1121004 ']' 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:16.851 09:31:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:16.851 [2024-06-11 09:31:47.804803] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:17:16.851 [2024-06-11 09:31:47.804866] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.851 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.852 [2024-06-11 09:31:47.892160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:16.852 [2024-06-11 09:31:47.989240] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.852 [2024-06-11 09:31:47.989291] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:16.852 [2024-06-11 09:31:47.989299] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.852 [2024-06-11 09:31:47.989306] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.852 [2024-06-11 09:31:47.989312] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.852 [2024-06-11 09:31:47.989517] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:17:16.852 [2024-06-11 09:31:47.989855] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:17:16.852 [2024-06-11 09:31:47.990028] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:17:16.852 [2024-06-11 09:31:47.990030] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.852 09:31:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:16.852 09:31:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:17:16.852 09:31:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:16.852 09:31:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:16.852 09:31:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:17.114 [2024-06-11 09:31:48.705647] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:17.114 Malloc0 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
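The rpc_cmd calls traced above are the entire target bring-up for this test: one TCP transport, one RAM-backed bdev, one subsystem carrying that bdev as a namespace, and a listener on the namespaced address (its "Listening" notice follows just below). A minimal standalone sketch of the same five calls, assuming a running nvmf_tgt and SPDK's stock scripts/rpc.py, with the long Jenkins paths shortened; the flags are copied verbatim from the trace:

    #!/usr/bin/env bash
    set -e
    RPC=./scripts/rpc.py                                  # ships with SPDK
    $RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB IO unit size
    $RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB malloc bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001                         # -a allow any host, -s serial number
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420                       # the namespaced cvl_0_0 address

The 64 and 512 come from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE set at the top of bdevio.sh, so the resulting namespace is 131072 blocks of 512 bytes.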
00:17:17.114 [2024-06-11 09:31:48.771360] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:17.114 { 00:17:17.114 "params": { 00:17:17.114 "name": "Nvme$subsystem", 00:17:17.114 "trtype": "$TEST_TRANSPORT", 00:17:17.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.114 "adrfam": "ipv4", 00:17:17.114 "trsvcid": "$NVMF_PORT", 00:17:17.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.114 "hdgst": ${hdgst:-false}, 00:17:17.114 "ddgst": ${ddgst:-false} 00:17:17.114 }, 00:17:17.114 "method": "bdev_nvme_attach_controller" 00:17:17.114 } 00:17:17.114 EOF 00:17:17.114 )") 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:17.114 09:31:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:17.114 "params": { 00:17:17.114 "name": "Nvme1", 00:17:17.114 "trtype": "tcp", 00:17:17.114 "traddr": "10.0.0.2", 00:17:17.114 "adrfam": "ipv4", 00:17:17.114 "trsvcid": "4420", 00:17:17.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.114 "hdgst": false, 00:17:17.114 "ddgst": false 00:17:17.114 }, 00:17:17.114 "method": "bdev_nvme_attach_controller" 00:17:17.114 }' 00:17:17.114 [2024-06-11 09:31:48.828817] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
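Pretty-printed, the heredoc that gen_nvmf_target_json expanded just above is a single bdev_nvme_attach_controller request aimed at the listener just created; the helper folds it into the full JSON config that bdevio reads back over --json /dev/fd/62:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

With hdgst and ddgst false, no TCP header or data digests are negotiated, and the attached controller surfaces as the Nvme1n1 bdev that the suite below exercises.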
00:17:17.114 [2024-06-11 09:31:48.828883] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1121203 ] 00:17:17.114 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.114 [2024-06-11 09:31:48.909460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:17.375 [2024-06-11 09:31:49.007028] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.375 [2024-06-11 09:31:49.007162] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.375 [2024-06-11 09:31:49.007164] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.636 I/O targets: 00:17:17.636 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:17.636 00:17:17.636 00:17:17.636 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.636 http://cunit.sourceforge.net/ 00:17:17.636 00:17:17.636 00:17:17.636 Suite: bdevio tests on: Nvme1n1 00:17:17.636 Test: blockdev write read block ...passed 00:17:17.636 Test: blockdev write zeroes read block ...passed 00:17:17.636 Test: blockdev write zeroes read no split ...passed 00:17:17.636 Test: blockdev write zeroes read split ...passed 00:17:17.897 Test: blockdev write zeroes read split partial ...passed 00:17:17.897 Test: blockdev reset ...[2024-06-11 09:31:49.460343] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:17.897 [2024-06-11 09:31:49.460408] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65b560 (9): Bad file descriptor 00:17:17.897 [2024-06-11 09:31:49.471606] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
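Two notes on the reset test that just ran: the nvme_tcp "Bad file descriptor" error is the qpair flush racing with the socket teardown that the reset itself triggers, and since the reset still completes with "Resetting controller successful" it reads as benign here rather than a failure. The same listener can also be exercised by hand with the kernel initiator, much as the fio stages earlier in this run do; a hypothetical manual session (device names will vary; run from the root namespace):

    modprobe nvme-tcp                                 # as the harness does above
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    lsblk -o NAME,SERIAL | grep SPDK00000000000001    # serial set at subsystem creation
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1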
00:17:17.897 passed 00:17:17.897 Test: blockdev write read 8 blocks ...passed 00:17:17.897 Test: blockdev write read size > 128k ...passed 00:17:17.897 Test: blockdev write read invalid size ...passed 00:17:17.897 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:17.897 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:17.897 Test: blockdev write read max offset ...passed 00:17:17.897 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:17.897 Test: blockdev writev readv 8 blocks ...passed 00:17:17.897 Test: blockdev writev readv 30 x 1block ...passed 00:17:17.897 Test: blockdev writev readv block ...passed 00:17:17.897 Test: blockdev writev readv size > 128k ...passed 00:17:17.897 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:17.897 Test: blockdev comparev and writev ...[2024-06-11 09:31:49.688616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.897 [2024-06-11 09:31:49.688644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.897 [2024-06-11 09:31:49.688658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.897 [2024-06-11 09:31:49.688666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:17.897 [2024-06-11 09:31:49.689026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.897 [2024-06-11 09:31:49.689036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:17.897 [2024-06-11 09:31:49.689050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.897 [2024-06-11 09:31:49.689059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:17.897 [2024-06-11 09:31:49.689422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.897 [2024-06-11 09:31:49.689431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:17.897 [2024-06-11 09:31:49.689444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.897 [2024-06-11 09:31:49.689453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:17.897 [2024-06-11 09:31:49.689802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.897 [2024-06-11 09:31:49.689810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:17.897 [2024-06-11 09:31:49.689824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.897 [2024-06-11 09:31:49.689832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:18.159 passed 00:17:18.159 Test: blockdev nvme passthru rw ...passed 00:17:18.159 Test: blockdev nvme passthru vendor specific ...[2024-06-11 09:31:49.772692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:18.159 [2024-06-11 09:31:49.772707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:18.159 [2024-06-11 09:31:49.772958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:18.159 [2024-06-11 09:31:49.772967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:18.159 [2024-06-11 09:31:49.773216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:18.159 [2024-06-11 09:31:49.773224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:18.159 [2024-06-11 09:31:49.773480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:18.159 [2024-06-11 09:31:49.773488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:18.159 passed 00:17:18.159 Test: blockdev nvme admin passthru ...passed 00:17:18.159 Test: blockdev copy ...passed 00:17:18.159 00:17:18.159 Run Summary: Type Total Ran Passed Failed Inactive 00:17:18.159 suites 1 1 n/a 0 0 00:17:18.159 tests 23 23 23 0 0 00:17:18.159 asserts 152 152 152 0 n/a 00:17:18.159 00:17:18.159 Elapsed time = 1.074 seconds 00:17:18.159 09:31:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.159 09:31:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:18.159 09:31:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:18.159 09:31:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:18.159 09:31:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:18.159 09:31:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:18.159 09:31:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:18.159 09:31:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:18.159 09:31:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:18.159 09:31:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:18.159 09:31:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:18.159 09:31:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:18.159 rmmod nvme_tcp 00:17:18.419 rmmod nvme_fabrics 00:17:18.419 rmmod nvme_keyring 00:17:18.419 09:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:18.419 09:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:18.419 09:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:18.419 09:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1121004 ']' 00:17:18.419 09:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1121004 00:17:18.419 09:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 
1121004 ']' 00:17:18.419 09:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 1121004 00:17:18.419 09:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:17:18.419 09:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:18.420 09:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1121004 00:17:18.420 09:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:17:18.420 09:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:17:18.420 09:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1121004' 00:17:18.420 killing process with pid 1121004 00:17:18.420 09:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 1121004 00:17:18.420 09:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 1121004 00:17:18.680 09:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:18.680 09:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:18.680 09:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:18.680 09:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:18.680 09:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:18.680 09:31:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.680 09:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.680 09:31:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.596 09:31:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:20.596 00:17:20.596 real 0m12.038s 00:17:20.596 user 0m13.347s 00:17:20.596 sys 0m6.030s 00:17:20.596 09:31:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:20.596 09:31:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:20.596 ************************************ 00:17:20.596 END TEST nvmf_bdevio 00:17:20.596 ************************************ 00:17:20.596 09:31:52 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:20.596 09:31:52 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:20.596 09:31:52 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:20.596 09:31:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:20.596 ************************************ 00:17:20.596 START TEST nvmf_auth_target 00:17:20.596 ************************************ 00:17:20.596 09:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:20.857 * Looking for test storage... 
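The nvmf_bdevio teardown traced above follows the fixed pattern from test/nvmf/common.sh and autotest_common.sh: unload the initiator-side kernel modules with retries, kill the nvmf_tgt app, then flush the test address. A condensed sketch of that sequence, with error handling elided (the retry bound and device name mirror the trace; the pid is the one from this run):

set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break    # retried until nvme_tcp/nvme_fabrics/nvme_keyring unload cleanly
done
modprobe -v -r nvme-fabrics
set -e
kill 1121004 && wait 1121004            # killprocess: stop the nvmf_tgt started by nvmfappstart
ip -4 addr flush cvl_0_1                # drop the initiator-side test address

With that cleanup done, the harness moves on to the next test in nvmf.sh, nvmf_auth_target, whose setup follows.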
00:17:20.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.857 09:31:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:20.858 09:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.456 09:31:59 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:27.456 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:27.456 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:27.456 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:17:27.457 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:27.457 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:27.457 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:27.719 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:27.719 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:27.719 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:27.719 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:27.719 09:31:59 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:27.719 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:27.719 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:27.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:17:27.719 00:17:27.719 --- 10.0.0.2 ping statistics --- 00:17:27.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.719 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:17:27.719 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:27.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:27.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:17:27.719 00:17:27.719 --- 10.0.0.1 ping statistics --- 00:17:27.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.719 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:17:27.719 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.979 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:27.979 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:27.979 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.979 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:27.979 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:27.979 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.979 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:27.979 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:27.979 09:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:27.979 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:27.979 09:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:27.979 09:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.979 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1126504 00:17:27.979 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1126504 00:17:27.980 09:31:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:27.980 09:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1126504 ']' 00:17:27.980 09:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.980 09:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:27.980 09:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
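nvmftestinit has now built the two-endpoint TCP topology the rest of auth.sh runs over: the target-side E810 port (cvl_0_0) is moved into a private network namespace at 10.0.0.2 while the initiator-side port (cvl_0_1) stays in the root namespace at 10.0.0.1, so with NET_TYPE=phy the pings above cross the physical link between the two ports rather than loopback. Collected from the trace, the essential commands are:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator side
ping -c 1 10.0.0.2                                 # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator reachability

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth, visible in the trace just above), which is why the subsystem listens at 10.0.0.2 while the host-side spdk_tgt started next stays in the root namespace.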
00:17:27.980 09:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:27.980 09:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.921 09:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:28.921 09:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:28.921 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:28.921 09:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:28.921 09:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.921 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.921 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1126846 00:17:28.921 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:28.921 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:28.921 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:28.921 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:28.921 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:28.921 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:28.921 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:28.921 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:28.921 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:28.921 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2fc83c7ef04f0e4bbd611604e6888040c6114db74e179085 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.hjv 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2fc83c7ef04f0e4bbd611604e6888040c6114db74e179085 0 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2fc83c7ef04f0e4bbd611604e6888040c6114db74e179085 0 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2fc83c7ef04f0e4bbd611604e6888040c6114db74e179085 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.hjv 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.hjv 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.hjv 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2ff07490049806e969b6dade52713d434873c2058826c6dbc62157ea26495687 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.RVK 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2ff07490049806e969b6dade52713d434873c2058826c6dbc62157ea26495687 3 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2ff07490049806e969b6dade52713d434873c2058826c6dbc62157ea26495687 3 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2ff07490049806e969b6dade52713d434873c2058826c6dbc62157ea26495687 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.RVK 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.RVK 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.RVK 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fd0c6847741c6b1fd69357a528f0cebf 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Zz1 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fd0c6847741c6b1fd69357a528f0cebf 1 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fd0c6847741c6b1fd69357a528f0cebf 1 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=fd0c6847741c6b1fd69357a528f0cebf 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Zz1 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Zz1 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Zz1 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dfc26e7f32da27f79f460e09461edbc95d5dc820eb8e3705 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.CG6 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dfc26e7f32da27f79f460e09461edbc95d5dc820eb8e3705 2 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dfc26e7f32da27f79f460e09461edbc95d5dc820eb8e3705 2 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dfc26e7f32da27f79f460e09461edbc95d5dc820eb8e3705 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:28.922 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.CG6 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.CG6 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.CG6 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d590a2390f8f468fc5b8ab9bd97a570906a8c56fcd147435 00:17:29.183 
09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ugt 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d590a2390f8f468fc5b8ab9bd97a570906a8c56fcd147435 2 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d590a2390f8f468fc5b8ab9bd97a570906a8c56fcd147435 2 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d590a2390f8f468fc5b8ab9bd97a570906a8c56fcd147435 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ugt 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ugt 00:17:29.183 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.ugt 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b883ba4219d840c44e7b1d4d3b1d350e 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.dWg 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b883ba4219d840c44e7b1d4d3b1d350e 1 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b883ba4219d840c44e7b1d4d3b1d350e 1 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b883ba4219d840c44e7b1d4d3b1d350e 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.dWg 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.dWg 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.dWg 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0b5ed01b1f705bb879e3d3daa6420ed6271180ea3c402be0e5955f7bbc2a091d 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.bcl 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0b5ed01b1f705bb879e3d3daa6420ed6271180ea3c402be0e5955f7bbc2a091d 3 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0b5ed01b1f705bb879e3d3daa6420ed6271180ea3c402be0e5955f7bbc2a091d 3 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0b5ed01b1f705bb879e3d3daa6420ed6271180ea3c402be0e5955f7bbc2a091d 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.bcl 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.bcl 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.bcl 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1126504 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1126504 ']' 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
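At this point gen_dhchap_key has produced four key files plus controller keys (/tmp/spdk.key-null.hjv through /tmp/spdk.key-sha512.bcl, with ckeys[3] deliberately left empty). Each file holds a DH-HMAC-CHAP secret in the DHHC-1 representation: a two-hex-digit hash indicator (00 = unhashed secret, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) followed by the base64 of the secret bytes with a 4-byte CRC-32 appended. A minimal re-derivation of the null/48 case from the trace — the real helper lives in test/nvmf/common.sh, and the little-endian CRC suffix here is my reading of the spec'd secret format, shown for illustration only:

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex characters used as the secret
python3 - "$key" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")   # integrity suffix required by the DHHC-1 format
print("DHHC-1:00:%s:" % base64.b64encode(secret + crc).decode())
EOF

Strings of exactly this shape reappear verbatim later in the nvme connect lines as --dhchap-secret and --dhchap-ctrl-secret (e.g. DHHC-1:00:MmZj...: for key0).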
00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:29.184 09:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.444 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:29.444 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:29.444 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1126846 /var/tmp/host.sock 00:17:29.444 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1126846 ']' 00:17:29.444 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:17:29.444 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:29.444 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:29.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:29.444 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:29.444 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.705 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:29.705 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:29.705 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:29.705 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.705 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.705 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.705 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:29.705 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hjv 00:17:29.705 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.705 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.705 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.705 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.hjv 00:17:29.705 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.hjv 00:17:29.966 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.RVK ]] 00:17:29.966 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RVK 00:17:29.966 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.966 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.966 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.966 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RVK 00:17:29.966 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RVK 00:17:29.966 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:29.966 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Zz1 00:17:29.966 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.966 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.227 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.227 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Zz1 00:17:30.227 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Zz1 00:17:30.227 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.CG6 ]] 00:17:30.227 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CG6 00:17:30.227 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.227 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.227 09:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.227 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CG6 00:17:30.227 09:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CG6 00:17:30.487 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:30.487 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ugt 00:17:30.487 09:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.487 09:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.487 09:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.487 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ugt 00:17:30.487 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ugt 00:17:30.746 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.dWg ]] 00:17:30.746 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dWg 00:17:30.746 09:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.746 09:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.746 09:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.746 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dWg 00:17:30.746 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.dWg 00:17:31.007 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:31.007 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.bcl 00:17:31.007 09:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.007 09:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.007 09:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.007 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.bcl 00:17:31.007 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.bcl 00:17:31.267 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:31.267 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:31.267 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.267 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.267 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:31.267 09:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:31.267 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:31.267 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.267 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:31.267 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:31.267 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:31.267 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.267 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.267 09:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.267 09:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.267 09:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.267 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.267 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.528 00:17:31.528 09:32:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.528 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.528 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.802 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.802 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.802 09:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.802 09:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.802 09:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.802 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.802 { 00:17:31.802 "cntlid": 1, 00:17:31.802 "qid": 0, 00:17:31.802 "state": "enabled", 00:17:31.802 "listen_address": { 00:17:31.802 "trtype": "TCP", 00:17:31.802 "adrfam": "IPv4", 00:17:31.802 "traddr": "10.0.0.2", 00:17:31.802 "trsvcid": "4420" 00:17:31.802 }, 00:17:31.802 "peer_address": { 00:17:31.802 "trtype": "TCP", 00:17:31.802 "adrfam": "IPv4", 00:17:31.802 "traddr": "10.0.0.1", 00:17:31.802 "trsvcid": "34088" 00:17:31.802 }, 00:17:31.802 "auth": { 00:17:31.802 "state": "completed", 00:17:31.802 "digest": "sha256", 00:17:31.802 "dhgroup": "null" 00:17:31.802 } 00:17:31.802 } 00:17:31.802 ]' 00:17:31.802 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.802 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.802 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.111 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:32.111 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.111 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.111 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.111 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.111 09:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==: --dhchap-ctrl-secret DHHC-1:03:MmZmMDc0OTAwNDk4MDZlOTY5YjZkYWRlNTI3MTNkNDM0ODczYzIwNTg4MjZjNmRiYzYyMTU3ZWEyNjQ5NTY4Nwknc4c=: 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.055 09:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.316 00:17:33.316 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.316 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.316 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.576 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.577 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.577 09:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.577 09:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.577 09:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.577 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.577 { 00:17:33.577 "cntlid": 3, 00:17:33.577 "qid": 0, 00:17:33.577 "state": "enabled", 00:17:33.577 "listen_address": { 00:17:33.577 
"trtype": "TCP", 00:17:33.577 "adrfam": "IPv4", 00:17:33.577 "traddr": "10.0.0.2", 00:17:33.577 "trsvcid": "4420" 00:17:33.577 }, 00:17:33.577 "peer_address": { 00:17:33.577 "trtype": "TCP", 00:17:33.577 "adrfam": "IPv4", 00:17:33.577 "traddr": "10.0.0.1", 00:17:33.577 "trsvcid": "34114" 00:17:33.577 }, 00:17:33.577 "auth": { 00:17:33.577 "state": "completed", 00:17:33.577 "digest": "sha256", 00:17:33.577 "dhgroup": "null" 00:17:33.577 } 00:17:33.577 } 00:17:33.577 ]' 00:17:33.577 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.577 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.577 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.836 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:33.836 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.836 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.836 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.836 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.095 09:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQwYzY4NDc3NDFjNmIxZmQ2OTM1N2E1MjhmMGNlYmZBnqjx: --dhchap-ctrl-secret DHHC-1:02:ZGZjMjZlN2YzMmRhMjdmNzlmNDYwZTA5NDYxZWRiYzk1ZDVkYzgyMGViOGUzNzA1cJlE/A==: 00:17:34.666 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.666 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:34.666 09:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.666 09:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.666 09:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.666 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.666 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:34.666 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:34.927 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:34.927 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.927 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:34.927 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:34.927 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:34.927 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.927 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.927 09:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.927 09:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.927 09:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.927 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.927 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.188 00:17:35.188 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.188 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.188 09:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.449 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.449 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.449 09:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:35.449 09:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.449 09:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:35.449 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.449 { 00:17:35.449 "cntlid": 5, 00:17:35.449 "qid": 0, 00:17:35.449 "state": "enabled", 00:17:35.449 "listen_address": { 00:17:35.449 "trtype": "TCP", 00:17:35.449 "adrfam": "IPv4", 00:17:35.449 "traddr": "10.0.0.2", 00:17:35.449 "trsvcid": "4420" 00:17:35.449 }, 00:17:35.449 "peer_address": { 00:17:35.449 "trtype": "TCP", 00:17:35.449 "adrfam": "IPv4", 00:17:35.449 "traddr": "10.0.0.1", 00:17:35.449 "trsvcid": "34146" 00:17:35.449 }, 00:17:35.449 "auth": { 00:17:35.449 "state": "completed", 00:17:35.449 "digest": "sha256", 00:17:35.449 "dhgroup": "null" 00:17:35.449 } 00:17:35.449 } 00:17:35.449 ]' 00:17:35.449 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.449 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.449 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.449 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:35.449 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.449 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.449 09:32:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.449 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.710 09:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDU5MGEyMzkwZjhmNDY4ZmM1YjhhYjliZDk3YTU3MDkwNmE4YzU2ZmNkMTQ3NDM1H3XLlg==: --dhchap-ctrl-secret DHHC-1:01:Yjg4M2JhNDIxOWQ4NDBjNDRlN2IxZDRkM2IxZDM1MGVMXTYn: 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.653 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.914 00:17:36.914 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.914 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.914 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.186 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.186 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.186 09:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.186 09:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.186 09:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.186 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.186 { 00:17:37.186 "cntlid": 7, 00:17:37.186 "qid": 0, 00:17:37.186 "state": "enabled", 00:17:37.186 "listen_address": { 00:17:37.186 "trtype": "TCP", 00:17:37.187 "adrfam": "IPv4", 00:17:37.187 "traddr": "10.0.0.2", 00:17:37.187 "trsvcid": "4420" 00:17:37.187 }, 00:17:37.187 "peer_address": { 00:17:37.187 "trtype": "TCP", 00:17:37.187 "adrfam": "IPv4", 00:17:37.187 "traddr": "10.0.0.1", 00:17:37.187 "trsvcid": "34166" 00:17:37.187 }, 00:17:37.187 "auth": { 00:17:37.187 "state": "completed", 00:17:37.187 "digest": "sha256", 00:17:37.187 "dhgroup": "null" 00:17:37.187 } 00:17:37.187 } 00:17:37.187 ]' 00:17:37.187 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.187 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:37.187 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.187 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:37.187 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.453 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.453 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.453 09:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.453 09:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MGI1ZWQwMWIxZjcwNWJiODc5ZTNkM2RhYTY0MjBlZDYyNzExODBlYTNjNDAyYmUwZTU5NTVmN2JiYzJhMDkxZIZ6OA8=: 00:17:38.394 09:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.394 09:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.394 09:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.394 
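# [editor's note] Illustrative reconstruction of one connect_authenticate round as traced
# above; the command forms and flags are verbatim from the log, while the shell variables
# and the $rpc shorthand are added here for readability.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
digest=sha256 dhgroup=null key=key1
# host side: restrict the initiator to the digest/dhgroup under test
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# target side: authorize the host NQN with the key pair for this round
# (the ctrlr-key is only passed when a ckey exists for that id, cf. auth.sh@37)
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key "$key" --dhchap-ctrlr-key "c$key"
# host side: attach the controller, which forces the authentication exchange
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "$key" --dhchap-ctrlr-key "c$key"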
09:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.394 09:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.394 09:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.394 09:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.394 09:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:38.394 09:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:38.394 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:38.394 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.394 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:38.394 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:38.394 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:38.394 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.394 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.394 09:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.394 09:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.394 09:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.394 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.394 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.655 00:17:38.655 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.655 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.655 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.915 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.915 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.915 09:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.915 09:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.915 09:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.915 09:32:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.915 { 00:17:38.915 "cntlid": 9, 00:17:38.915 "qid": 0, 00:17:38.915 "state": "enabled", 00:17:38.915 "listen_address": { 00:17:38.915 "trtype": "TCP", 00:17:38.915 "adrfam": "IPv4", 00:17:38.915 "traddr": "10.0.0.2", 00:17:38.915 "trsvcid": "4420" 00:17:38.915 }, 00:17:38.915 "peer_address": { 00:17:38.915 "trtype": "TCP", 00:17:38.915 "adrfam": "IPv4", 00:17:38.915 "traddr": "10.0.0.1", 00:17:38.915 "trsvcid": "34438" 00:17:38.915 }, 00:17:38.915 "auth": { 00:17:38.915 "state": "completed", 00:17:38.915 "digest": "sha256", 00:17:38.915 "dhgroup": "ffdhe2048" 00:17:38.915 } 00:17:38.915 } 00:17:38.915 ]' 00:17:38.915 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.915 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.915 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.175 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:39.175 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.175 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.175 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.175 09:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.435 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==: --dhchap-ctrl-secret DHHC-1:03:MmZmMDc0OTAwNDk4MDZlOTY5YjZkYWRlNTI3MTNkNDM0ODczYzIwNTg4MjZjNmRiYzYyMTU3ZWEyNjQ5NTY4Nwknc4c=: 00:17:40.005 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.005 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.005 09:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.005 09:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.005 09:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.005 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.005 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:40.005 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:40.266 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:40.266 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.266 09:32:11 
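# [editor's note] The kernel-initiator leg that follows each verification above, sketched
# from the traced commands; the <...> placeholders stand for the full DHHC-1 secrets shown
# in the log, and the unsocketed rpc.py invocation for the target is an assumption.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-secret 'DHHC-1:00:<host secret>' --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: "disconnected 1 controller(s)"
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be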
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:40.266 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:40.266 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:40.266 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.266 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.266 09:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.266 09:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.266 09:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.266 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.266 09:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.534 00:17:40.534 09:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.534 09:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.534 09:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.793 09:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.793 09:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.793 09:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.793 09:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.793 09:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.793 09:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.793 { 00:17:40.793 "cntlid": 11, 00:17:40.793 "qid": 0, 00:17:40.793 "state": "enabled", 00:17:40.793 "listen_address": { 00:17:40.793 "trtype": "TCP", 00:17:40.793 "adrfam": "IPv4", 00:17:40.793 "traddr": "10.0.0.2", 00:17:40.793 "trsvcid": "4420" 00:17:40.793 }, 00:17:40.793 "peer_address": { 00:17:40.793 "trtype": "TCP", 00:17:40.793 "adrfam": "IPv4", 00:17:40.793 "traddr": "10.0.0.1", 00:17:40.793 "trsvcid": "34464" 00:17:40.793 }, 00:17:40.793 "auth": { 00:17:40.793 "state": "completed", 00:17:40.793 "digest": "sha256", 00:17:40.793 "dhgroup": "ffdhe2048" 00:17:40.793 } 00:17:40.793 } 00:17:40.793 ]' 00:17:40.793 09:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.793 09:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.793 09:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.793 09:32:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:40.793 09:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.793 09:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.793 09:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.793 09:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.053 09:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQwYzY4NDc3NDFjNmIxZmQ2OTM1N2E1MjhmMGNlYmZBnqjx: --dhchap-ctrl-secret DHHC-1:02:ZGZjMjZlN2YzMmRhMjdmNzlmNDYwZTA5NDYxZWRiYzk1ZDVkYzgyMGViOGUzNzA1cJlE/A==: 00:17:41.995 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.995 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.995 09:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.995 09:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.995 09:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.995 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.995 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:41.995 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:41.995 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:41.995 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.995 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.996 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:41.996 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:41.996 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.996 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.996 09:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.996 09:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.996 09:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.996 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.996 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.257 00:17:42.257 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.257 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.257 09:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.517 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.517 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.517 09:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.517 09:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.518 09:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.518 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.518 { 00:17:42.518 "cntlid": 13, 00:17:42.518 "qid": 0, 00:17:42.518 "state": "enabled", 00:17:42.518 "listen_address": { 00:17:42.518 "trtype": "TCP", 00:17:42.518 "adrfam": "IPv4", 00:17:42.518 "traddr": "10.0.0.2", 00:17:42.518 "trsvcid": "4420" 00:17:42.518 }, 00:17:42.518 "peer_address": { 00:17:42.518 "trtype": "TCP", 00:17:42.518 "adrfam": "IPv4", 00:17:42.518 "traddr": "10.0.0.1", 00:17:42.518 "trsvcid": "34488" 00:17:42.518 }, 00:17:42.518 "auth": { 00:17:42.518 "state": "completed", 00:17:42.518 "digest": "sha256", 00:17:42.518 "dhgroup": "ffdhe2048" 00:17:42.518 } 00:17:42.518 } 00:17:42.518 ]' 00:17:42.518 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.518 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.518 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.518 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:42.518 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.518 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.518 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.518 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.778 09:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDU5MGEyMzkwZjhmNDY4ZmM1YjhhYjliZDk3YTU3MDkwNmE4YzU2ZmNkMTQ3NDM1H3XLlg==: --dhchap-ctrl-secret DHHC-1:01:Yjg4M2JhNDIxOWQ4NDBjNDRlN2IxZDRkM2IxZDM1MGVMXTYn: 00:17:43.721 09:32:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.721 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.721 09:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.721 09:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.721 09:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.721 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.721 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:43.721 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:43.721 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:43.721 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.721 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:43.721 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:43.721 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:43.721 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.721 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:43.721 09:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.721 09:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.721 09:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.721 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.721 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.982 00:17:43.982 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.982 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.982 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.243 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.243 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:17:44.243 09:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.243 09:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.243 09:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.243 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.243 { 00:17:44.243 "cntlid": 15, 00:17:44.243 "qid": 0, 00:17:44.243 "state": "enabled", 00:17:44.243 "listen_address": { 00:17:44.243 "trtype": "TCP", 00:17:44.243 "adrfam": "IPv4", 00:17:44.243 "traddr": "10.0.0.2", 00:17:44.243 "trsvcid": "4420" 00:17:44.243 }, 00:17:44.243 "peer_address": { 00:17:44.243 "trtype": "TCP", 00:17:44.243 "adrfam": "IPv4", 00:17:44.243 "traddr": "10.0.0.1", 00:17:44.243 "trsvcid": "34512" 00:17:44.243 }, 00:17:44.243 "auth": { 00:17:44.243 "state": "completed", 00:17:44.243 "digest": "sha256", 00:17:44.243 "dhgroup": "ffdhe2048" 00:17:44.243 } 00:17:44.243 } 00:17:44.243 ]' 00:17:44.243 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.243 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.243 09:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.243 09:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:44.243 09:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.503 09:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.504 09:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.504 09:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.504 09:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MGI1ZWQwMWIxZjcwNWJiODc5ZTNkM2RhYTY0MjBlZDYyNzExODBlYTNjNDAyYmUwZTU5NTVmN2JiYzJhMDkxZIZ6OA8=: 00:17:45.444 09:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.444 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.444 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.444 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.444 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.444 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.444 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.445 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:45.445 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
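# [editor's note] Sketch of the driver loop behind this stretch of the trace
# (target/auth.sh@92-96); the array contents are inferred from the groups and key ids
# that actually appear in this log, not read from auth.sh itself.
for dhgroup in "${dhgroups[@]}"; do        # null ffdhe2048 ffdhe3072 ffdhe4096 ...
    for keyid in "${!keys[@]}"; do         # 0 1 2 3
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha256 "$dhgroup" "$keyid"
    done
done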
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:45.445 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:45.445 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.445 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:45.445 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:45.445 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:45.445 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.445 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.445 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.445 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.445 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.445 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.445 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.706 00:17:45.967 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.967 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.967 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.967 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.967 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.967 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.967 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.967 09:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.967 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.967 { 00:17:45.967 "cntlid": 17, 00:17:45.967 "qid": 0, 00:17:45.967 "state": "enabled", 00:17:45.967 "listen_address": { 00:17:45.967 "trtype": "TCP", 00:17:45.967 "adrfam": "IPv4", 00:17:45.967 "traddr": "10.0.0.2", 00:17:45.967 "trsvcid": "4420" 00:17:45.967 }, 00:17:45.967 "peer_address": { 00:17:45.967 "trtype": "TCP", 00:17:45.967 "adrfam": "IPv4", 00:17:45.967 "traddr": "10.0.0.1", 00:17:45.967 "trsvcid": "34538" 00:17:45.967 }, 00:17:45.967 "auth": { 00:17:45.967 "state": "completed", 00:17:45.967 "digest": "sha256", 00:17:45.967 "dhgroup": "ffdhe3072" 00:17:45.967 } 00:17:45.967 } 00:17:45.967 ]' 00:17:45.967 09:32:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.228 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.228 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.228 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:46.228 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.228 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.228 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.228 09:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.489 09:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==: --dhchap-ctrl-secret DHHC-1:03:MmZmMDc0OTAwNDk4MDZlOTY5YjZkYWRlNTI3MTNkNDM0ODczYzIwNTg4MjZjNmRiYzYyMTU3ZWEyNjQ5NTY4Nwknc4c=: 00:17:47.059 09:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.059 09:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.059 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.059 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.059 09:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.059 09:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.059 09:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:47.059 09:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:47.320 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:47.320 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.320 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:47.320 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:47.320 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:47.320 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.320 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.320 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.320 
09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.320 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.320 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.320 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.581 00:17:47.581 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.581 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.581 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.842 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.842 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.842 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.842 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.842 09:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.842 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.842 { 00:17:47.842 "cntlid": 19, 00:17:47.842 "qid": 0, 00:17:47.842 "state": "enabled", 00:17:47.842 "listen_address": { 00:17:47.842 "trtype": "TCP", 00:17:47.842 "adrfam": "IPv4", 00:17:47.842 "traddr": "10.0.0.2", 00:17:47.842 "trsvcid": "4420" 00:17:47.842 }, 00:17:47.842 "peer_address": { 00:17:47.842 "trtype": "TCP", 00:17:47.842 "adrfam": "IPv4", 00:17:47.842 "traddr": "10.0.0.1", 00:17:47.842 "trsvcid": "34562" 00:17:47.842 }, 00:17:47.842 "auth": { 00:17:47.842 "state": "completed", 00:17:47.842 "digest": "sha256", 00:17:47.842 "dhgroup": "ffdhe3072" 00:17:47.842 } 00:17:47.842 } 00:17:47.842 ]' 00:17:47.842 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.842 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:47.842 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.842 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:47.842 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.103 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.103 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.103 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.103 09:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQwYzY4NDc3NDFjNmIxZmQ2OTM1N2E1MjhmMGNlYmZBnqjx: --dhchap-ctrl-secret DHHC-1:02:ZGZjMjZlN2YzMmRhMjdmNzlmNDYwZTA5NDYxZWRiYzk1ZDVkYzgyMGViOGUzNzA1cJlE/A==: 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.073 09:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.334 00:17:49.334 09:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.334 09:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.334 09:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.594 09:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.594 09:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.594 09:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.594 09:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.594 09:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.594 09:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.594 { 00:17:49.594 "cntlid": 21, 00:17:49.594 "qid": 0, 00:17:49.594 "state": "enabled", 00:17:49.594 "listen_address": { 00:17:49.594 "trtype": "TCP", 00:17:49.594 "adrfam": "IPv4", 00:17:49.594 "traddr": "10.0.0.2", 00:17:49.594 "trsvcid": "4420" 00:17:49.594 }, 00:17:49.594 "peer_address": { 00:17:49.594 "trtype": "TCP", 00:17:49.594 "adrfam": "IPv4", 00:17:49.594 "traddr": "10.0.0.1", 00:17:49.594 "trsvcid": "37658" 00:17:49.594 }, 00:17:49.594 "auth": { 00:17:49.594 "state": "completed", 00:17:49.595 "digest": "sha256", 00:17:49.595 "dhgroup": "ffdhe3072" 00:17:49.595 } 00:17:49.595 } 00:17:49.595 ]' 00:17:49.595 09:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.854 09:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.854 09:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.854 09:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:49.854 09:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.854 09:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.854 09:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.854 09:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.115 09:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDU5MGEyMzkwZjhmNDY4ZmM1YjhhYjliZDk3YTU3MDkwNmE4YzU2ZmNkMTQ3NDM1H3XLlg==: --dhchap-ctrl-secret DHHC-1:01:Yjg4M2JhNDIxOWQ4NDBjNDRlN2IxZDRkM2IxZDM1MGVMXTYn: 00:17:50.686 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.686 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.686 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.686 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.686 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.686 09:32:22 nvmf_tcp.nvmf_auth_target -- 
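# [editor's note] Every `hostrpc <method> ...` line above is expanded by target/auth.sh@31
# into the socketed rpc.py call that follows it in the trace; a minimal equivalent of that
# wrapper, assuming the paths seen in this log:
hostrpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}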
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.686 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:50.686 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:50.947 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:50.947 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.947 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:50.947 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:50.947 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:50.947 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.947 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:50.947 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.947 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.947 09:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.947 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:50.947 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:51.208 00:17:51.208 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.208 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.208 09:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.468 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.468 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.468 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.468 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.468 09:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.468 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.468 { 00:17:51.468 "cntlid": 23, 00:17:51.468 "qid": 0, 00:17:51.468 "state": "enabled", 00:17:51.468 "listen_address": { 00:17:51.468 "trtype": "TCP", 00:17:51.468 "adrfam": "IPv4", 00:17:51.468 "traddr": "10.0.0.2", 00:17:51.469 "trsvcid": "4420" 00:17:51.469 }, 00:17:51.469 "peer_address": { 00:17:51.469 "trtype": "TCP", 00:17:51.469 
"adrfam": "IPv4", 00:17:51.469 "traddr": "10.0.0.1", 00:17:51.469 "trsvcid": "37692" 00:17:51.469 }, 00:17:51.469 "auth": { 00:17:51.469 "state": "completed", 00:17:51.469 "digest": "sha256", 00:17:51.469 "dhgroup": "ffdhe3072" 00:17:51.469 } 00:17:51.469 } 00:17:51.469 ]' 00:17:51.469 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.469 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.469 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.469 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:51.469 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.729 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.729 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.729 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.729 09:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MGI1ZWQwMWIxZjcwNWJiODc5ZTNkM2RhYTY0MjBlZDYyNzExODBlYTNjNDAyYmUwZTU5NTVmN2JiYzJhMDkxZIZ6OA8=: 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.671 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.242 00:17:53.242 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.242 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.242 09:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.243 09:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.243 09:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.243 09:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.243 09:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.243 09:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.243 09:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.243 { 00:17:53.243 "cntlid": 25, 00:17:53.243 "qid": 0, 00:17:53.243 "state": "enabled", 00:17:53.243 "listen_address": { 00:17:53.243 "trtype": "TCP", 00:17:53.243 "adrfam": "IPv4", 00:17:53.243 "traddr": "10.0.0.2", 00:17:53.243 "trsvcid": "4420" 00:17:53.243 }, 00:17:53.243 "peer_address": { 00:17:53.243 "trtype": "TCP", 00:17:53.243 "adrfam": "IPv4", 00:17:53.243 "traddr": "10.0.0.1", 00:17:53.243 "trsvcid": "37722" 00:17:53.243 }, 00:17:53.243 "auth": { 00:17:53.243 "state": "completed", 00:17:53.243 "digest": "sha256", 00:17:53.243 "dhgroup": "ffdhe4096" 00:17:53.243 } 00:17:53.243 } 00:17:53.243 ]' 00:17:53.243 09:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.503 09:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.503 09:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.503 09:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:53.503 09:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.503 09:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.503 09:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.503 
09:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.763 09:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==: --dhchap-ctrl-secret DHHC-1:03:MmZmMDc0OTAwNDk4MDZlOTY5YjZkYWRlNTI3MTNkNDM0ODczYzIwNTg4MjZjNmRiYzYyMTU3ZWEyNjQ5NTY4Nwknc4c=: 00:17:54.334 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.334 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.334 09:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.334 09:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.334 09:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.334 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.334 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:54.334 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:54.594 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:54.594 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.594 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:54.594 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:54.594 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:54.594 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.594 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.594 09:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.594 09:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.594 09:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.594 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.594 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.854 00:17:54.854 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.854 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.854 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.115 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.115 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.115 09:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.115 09:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.115 09:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.115 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.115 { 00:17:55.115 "cntlid": 27, 00:17:55.115 "qid": 0, 00:17:55.115 "state": "enabled", 00:17:55.115 "listen_address": { 00:17:55.115 "trtype": "TCP", 00:17:55.115 "adrfam": "IPv4", 00:17:55.115 "traddr": "10.0.0.2", 00:17:55.115 "trsvcid": "4420" 00:17:55.115 }, 00:17:55.115 "peer_address": { 00:17:55.115 "trtype": "TCP", 00:17:55.115 "adrfam": "IPv4", 00:17:55.115 "traddr": "10.0.0.1", 00:17:55.115 "trsvcid": "37760" 00:17:55.116 }, 00:17:55.116 "auth": { 00:17:55.116 "state": "completed", 00:17:55.116 "digest": "sha256", 00:17:55.116 "dhgroup": "ffdhe4096" 00:17:55.116 } 00:17:55.116 } 00:17:55.116 ]' 00:17:55.116 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.116 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.116 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.376 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:55.376 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.376 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.376 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.376 09:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.637 09:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQwYzY4NDc3NDFjNmIxZmQ2OTM1N2E1MjhmMGNlYmZBnqjx: --dhchap-ctrl-secret DHHC-1:02:ZGZjMjZlN2YzMmRhMjdmNzlmNDYwZTA5NDYxZWRiYzk1ZDVkYzgyMGViOGUzNzA1cJlE/A==: 00:17:56.207 09:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.207 09:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
00:17:56.207 09:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:56.207 09:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.207 09:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:56.207 09:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.207 09:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:56.207 09:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:56.468 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:56.468 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.468 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.468 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:56.468 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:56.468 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.468 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.468 09:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:56.468 09:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.468 09:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:56.468 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.468 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.729 00:17:56.729 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.729 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.729 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.989 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.989 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.989 09:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:56.989 09:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.989 09:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:56.989 
09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.989 { 00:17:56.989 "cntlid": 29, 00:17:56.989 "qid": 0, 00:17:56.989 "state": "enabled", 00:17:56.989 "listen_address": { 00:17:56.989 "trtype": "TCP", 00:17:56.989 "adrfam": "IPv4", 00:17:56.989 "traddr": "10.0.0.2", 00:17:56.989 "trsvcid": "4420" 00:17:56.989 }, 00:17:56.989 "peer_address": { 00:17:56.989 "trtype": "TCP", 00:17:56.989 "adrfam": "IPv4", 00:17:56.989 "traddr": "10.0.0.1", 00:17:56.989 "trsvcid": "37796" 00:17:56.989 }, 00:17:56.989 "auth": { 00:17:56.989 "state": "completed", 00:17:56.989 "digest": "sha256", 00:17:56.989 "dhgroup": "ffdhe4096" 00:17:56.989 } 00:17:56.989 } 00:17:56.989 ]' 00:17:56.989 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.989 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.989 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.989 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:56.989 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.251 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.251 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.251 09:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.251 09:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDU5MGEyMzkwZjhmNDY4ZmM1YjhhYjliZDk3YTU3MDkwNmE4YzU2ZmNkMTQ3NDM1H3XLlg==: --dhchap-ctrl-secret DHHC-1:01:Yjg4M2JhNDIxOWQ4NDBjNDRlN2IxZDRkM2IxZDM1MGVMXTYn: 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.193 09:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.765 00:17:58.765 09:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.765 09:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.765 09:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.765 09:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.765 09:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.765 09:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.765 09:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.765 09:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.765 09:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.765 { 00:17:58.765 "cntlid": 31, 00:17:58.765 "qid": 0, 00:17:58.765 "state": "enabled", 00:17:58.765 "listen_address": { 00:17:58.765 "trtype": "TCP", 00:17:58.765 "adrfam": "IPv4", 00:17:58.765 "traddr": "10.0.0.2", 00:17:58.765 "trsvcid": "4420" 00:17:58.765 }, 00:17:58.765 "peer_address": { 00:17:58.765 "trtype": "TCP", 00:17:58.765 "adrfam": "IPv4", 00:17:58.765 "traddr": "10.0.0.1", 00:17:58.765 "trsvcid": "37836" 00:17:58.765 }, 00:17:58.765 "auth": { 00:17:58.765 "state": "completed", 00:17:58.765 "digest": "sha256", 00:17:58.765 "dhgroup": "ffdhe4096" 00:17:58.765 } 00:17:58.765 } 00:17:58.765 ]' 00:17:58.765 09:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.765 09:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.765 09:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.026 09:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.026 09:32:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.026 09:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.026 09:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.026 09:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.287 09:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MGI1ZWQwMWIxZjcwNWJiODc5ZTNkM2RhYTY0MjBlZDYyNzExODBlYTNjNDAyYmUwZTU5NTVmN2JiYzJhMDkxZIZ6OA8=: 00:17:59.858 09:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.858 09:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.858 09:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.858 09:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.858 09:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.858 09:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.858 09:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.858 09:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:59.858 09:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:00.119 09:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:00.119 09:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.119 09:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.119 09:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:00.119 09:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:00.119 09:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.119 09:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.119 09:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.119 09:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.119 09:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.119 09:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:18:00.119 09:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.380 00:18:00.641 09:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.641 09:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.641 09:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.641 09:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.641 09:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.641 09:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.641 09:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.641 09:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.641 09:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.641 { 00:18:00.641 "cntlid": 33, 00:18:00.641 "qid": 0, 00:18:00.641 "state": "enabled", 00:18:00.641 "listen_address": { 00:18:00.641 "trtype": "TCP", 00:18:00.641 "adrfam": "IPv4", 00:18:00.641 "traddr": "10.0.0.2", 00:18:00.641 "trsvcid": "4420" 00:18:00.641 }, 00:18:00.641 "peer_address": { 00:18:00.641 "trtype": "TCP", 00:18:00.641 "adrfam": "IPv4", 00:18:00.641 "traddr": "10.0.0.1", 00:18:00.641 "trsvcid": "39082" 00:18:00.641 }, 00:18:00.641 "auth": { 00:18:00.641 "state": "completed", 00:18:00.641 "digest": "sha256", 00:18:00.641 "dhgroup": "ffdhe6144" 00:18:00.641 } 00:18:00.641 } 00:18:00.641 ]' 00:18:00.641 09:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.901 09:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.901 09:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.901 09:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:00.901 09:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.901 09:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.901 09:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.901 09:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.161 09:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==: --dhchap-ctrl-secret DHHC-1:03:MmZmMDc0OTAwNDk4MDZlOTY5YjZkYWRlNTI3MTNkNDM0ODczYzIwNTg4MjZjNmRiYzYyMTU3ZWEyNjQ5NTY4Nwknc4c=: 00:18:01.732 09:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:01.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.732 09:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.732 09:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.732 09:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.732 09:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.732 09:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.732 09:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:01.732 09:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:01.992 09:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:01.992 09:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.992 09:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:01.992 09:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:01.992 09:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:01.992 09:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.992 09:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.992 09:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.992 09:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.992 09:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.993 09:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.993 09:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.564 00:18:02.564 09:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.564 09:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.564 09:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.564 09:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.564 09:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:18:02.564 09:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.564 09:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.564 09:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.564 09:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.564 { 00:18:02.564 "cntlid": 35, 00:18:02.564 "qid": 0, 00:18:02.564 "state": "enabled", 00:18:02.564 "listen_address": { 00:18:02.564 "trtype": "TCP", 00:18:02.564 "adrfam": "IPv4", 00:18:02.564 "traddr": "10.0.0.2", 00:18:02.564 "trsvcid": "4420" 00:18:02.564 }, 00:18:02.564 "peer_address": { 00:18:02.564 "trtype": "TCP", 00:18:02.564 "adrfam": "IPv4", 00:18:02.564 "traddr": "10.0.0.1", 00:18:02.564 "trsvcid": "39108" 00:18:02.564 }, 00:18:02.564 "auth": { 00:18:02.564 "state": "completed", 00:18:02.564 "digest": "sha256", 00:18:02.564 "dhgroup": "ffdhe6144" 00:18:02.564 } 00:18:02.564 } 00:18:02.564 ]' 00:18:02.564 09:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.825 09:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.825 09:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.825 09:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:02.825 09:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.825 09:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.825 09:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.825 09:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.085 09:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQwYzY4NDc3NDFjNmIxZmQ2OTM1N2E1MjhmMGNlYmZBnqjx: --dhchap-ctrl-secret DHHC-1:02:ZGZjMjZlN2YzMmRhMjdmNzlmNDYwZTA5NDYxZWRiYzk1ZDVkYzgyMGViOGUzNzA1cJlE/A==: 00:18:03.656 09:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.656 09:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.656 09:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.656 09:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.656 09:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.656 09:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.656 09:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:03.656 09:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:18:03.916 09:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:03.916 09:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.916 09:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:03.916 09:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:03.916 09:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:03.916 09:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.916 09:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.916 09:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.916 09:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.916 09:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.916 09:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.916 09:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.487 00:18:04.488 09:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.488 09:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.488 09:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.488 09:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.488 09:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.488 09:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:04.488 09:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.488 09:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:04.488 09:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.488 { 00:18:04.488 "cntlid": 37, 00:18:04.488 "qid": 0, 00:18:04.488 "state": "enabled", 00:18:04.488 "listen_address": { 00:18:04.488 "trtype": "TCP", 00:18:04.488 "adrfam": "IPv4", 00:18:04.488 "traddr": "10.0.0.2", 00:18:04.488 "trsvcid": "4420" 00:18:04.488 }, 00:18:04.488 "peer_address": { 00:18:04.488 "trtype": "TCP", 00:18:04.488 "adrfam": "IPv4", 00:18:04.488 "traddr": "10.0.0.1", 00:18:04.488 "trsvcid": "39122" 00:18:04.488 }, 00:18:04.488 "auth": { 00:18:04.488 "state": "completed", 00:18:04.488 "digest": "sha256", 00:18:04.488 "dhgroup": "ffdhe6144" 00:18:04.488 } 00:18:04.488 } 00:18:04.488 ]' 00:18:04.488 09:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:18:04.748 09:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.748 09:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.748 09:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.748 09:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.748 09:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.748 09:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.748 09:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.748 09:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDU5MGEyMzkwZjhmNDY4ZmM1YjhhYjliZDk3YTU3MDkwNmE4YzU2ZmNkMTQ3NDM1H3XLlg==: --dhchap-ctrl-secret DHHC-1:01:Yjg4M2JhNDIxOWQ4NDBjNDRlN2IxZDRkM2IxZDM1MGVMXTYn: 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.690 09:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.296 00:18:06.296 09:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.296 09:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.296 09:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.560 09:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.560 09:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.560 09:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.560 09:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.560 09:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.560 09:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.560 { 00:18:06.560 "cntlid": 39, 00:18:06.560 "qid": 0, 00:18:06.560 "state": "enabled", 00:18:06.560 "listen_address": { 00:18:06.560 "trtype": "TCP", 00:18:06.560 "adrfam": "IPv4", 00:18:06.560 "traddr": "10.0.0.2", 00:18:06.560 "trsvcid": "4420" 00:18:06.560 }, 00:18:06.560 "peer_address": { 00:18:06.560 "trtype": "TCP", 00:18:06.560 "adrfam": "IPv4", 00:18:06.560 "traddr": "10.0.0.1", 00:18:06.560 "trsvcid": "39152" 00:18:06.560 }, 00:18:06.560 "auth": { 00:18:06.560 "state": "completed", 00:18:06.560 "digest": "sha256", 00:18:06.560 "dhgroup": "ffdhe6144" 00:18:06.560 } 00:18:06.560 } 00:18:06.560 ]' 00:18:06.560 09:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.560 09:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.560 09:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.560 09:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.560 09:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.560 09:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.560 09:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.560 09:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.822 09:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:03:MGI1ZWQwMWIxZjcwNWJiODc5ZTNkM2RhYTY0MjBlZDYyNzExODBlYTNjNDAyYmUwZTU5NTVmN2JiYzJhMDkxZIZ6OA8=: 00:18:07.394 09:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.394 09:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.394 09:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.394 09:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.394 09:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.394 09:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.394 09:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.394 09:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:07.394 09:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:07.655 09:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:07.655 09:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.655 09:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:07.655 09:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:07.655 09:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:07.655 09:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.655 09:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.655 09:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.655 09:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.655 09:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.655 09:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.655 09:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.226 00:18:08.226 09:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.226 09:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.226 09:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.487 09:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.487 09:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.487 09:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.487 09:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.487 09:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.487 09:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.487 { 00:18:08.487 "cntlid": 41, 00:18:08.487 "qid": 0, 00:18:08.487 "state": "enabled", 00:18:08.487 "listen_address": { 00:18:08.487 "trtype": "TCP", 00:18:08.487 "adrfam": "IPv4", 00:18:08.487 "traddr": "10.0.0.2", 00:18:08.487 "trsvcid": "4420" 00:18:08.487 }, 00:18:08.487 "peer_address": { 00:18:08.487 "trtype": "TCP", 00:18:08.487 "adrfam": "IPv4", 00:18:08.487 "traddr": "10.0.0.1", 00:18:08.487 "trsvcid": "39178" 00:18:08.487 }, 00:18:08.487 "auth": { 00:18:08.487 "state": "completed", 00:18:08.487 "digest": "sha256", 00:18:08.487 "dhgroup": "ffdhe8192" 00:18:08.487 } 00:18:08.487 } 00:18:08.487 ]' 00:18:08.487 09:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.487 09:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.487 09:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.748 09:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:08.748 09:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.748 09:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.748 09:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.748 09:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.009 09:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==: --dhchap-ctrl-secret DHHC-1:03:MmZmMDc0OTAwNDk4MDZlOTY5YjZkYWRlNTI3MTNkNDM0ODczYzIwNTg4MjZjNmRiYzYyMTU3ZWEyNjQ5NTY4Nwknc4c=: 00:18:09.580 09:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.580 09:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.580 09:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.580 09:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.580 09:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.580 09:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:18:09.580 09:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:09.580 09:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:09.841 09:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:09.841 09:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.841 09:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:09.841 09:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:09.841 09:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:09.841 09:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.841 09:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.841 09:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.841 09:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.841 09:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.841 09:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.841 09:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.413 00:18:10.413 09:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.413 09:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.413 09:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.673 09:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.673 09:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.673 09:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:10.673 09:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.673 09:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:10.673 09:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.673 { 00:18:10.673 "cntlid": 43, 00:18:10.673 "qid": 0, 00:18:10.673 "state": "enabled", 00:18:10.673 "listen_address": { 00:18:10.673 "trtype": "TCP", 00:18:10.673 "adrfam": "IPv4", 00:18:10.673 "traddr": "10.0.0.2", 00:18:10.674 "trsvcid": "4420" 00:18:10.674 }, 00:18:10.674 "peer_address": { 
00:18:10.674 "trtype": "TCP", 00:18:10.674 "adrfam": "IPv4", 00:18:10.674 "traddr": "10.0.0.1", 00:18:10.674 "trsvcid": "46648" 00:18:10.674 }, 00:18:10.674 "auth": { 00:18:10.674 "state": "completed", 00:18:10.674 "digest": "sha256", 00:18:10.674 "dhgroup": "ffdhe8192" 00:18:10.674 } 00:18:10.674 } 00:18:10.674 ]' 00:18:10.674 09:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.674 09:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.674 09:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.674 09:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:10.674 09:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.934 09:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.934 09:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.934 09:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.934 09:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQwYzY4NDc3NDFjNmIxZmQ2OTM1N2E1MjhmMGNlYmZBnqjx: --dhchap-ctrl-secret DHHC-1:02:ZGZjMjZlN2YzMmRhMjdmNzlmNDYwZTA5NDYxZWRiYzk1ZDVkYzgyMGViOGUzNzA1cJlE/A==: 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.877 09:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.447 00:18:12.707 09:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.707 09:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.707 09:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.707 09:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.707 09:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.707 09:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.707 09:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.707 09:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.707 09:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.707 { 00:18:12.707 "cntlid": 45, 00:18:12.707 "qid": 0, 00:18:12.707 "state": "enabled", 00:18:12.707 "listen_address": { 00:18:12.707 "trtype": "TCP", 00:18:12.707 "adrfam": "IPv4", 00:18:12.707 "traddr": "10.0.0.2", 00:18:12.707 "trsvcid": "4420" 00:18:12.707 }, 00:18:12.707 "peer_address": { 00:18:12.707 "trtype": "TCP", 00:18:12.707 "adrfam": "IPv4", 00:18:12.707 "traddr": "10.0.0.1", 00:18:12.707 "trsvcid": "46672" 00:18:12.707 }, 00:18:12.707 "auth": { 00:18:12.707 "state": "completed", 00:18:12.707 "digest": "sha256", 00:18:12.707 "dhgroup": "ffdhe8192" 00:18:12.707 } 00:18:12.707 } 00:18:12.707 ]' 00:18:12.707 09:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.968 09:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.968 09:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.968 09:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.968 09:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.968 09:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.968 09:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.968 09:32:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.228 09:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDU5MGEyMzkwZjhmNDY4ZmM1YjhhYjliZDk3YTU3MDkwNmE4YzU2ZmNkMTQ3NDM1H3XLlg==: --dhchap-ctrl-secret DHHC-1:01:Yjg4M2JhNDIxOWQ4NDBjNDRlN2IxZDRkM2IxZDM1MGVMXTYn: 00:18:13.800 09:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.800 09:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.800 09:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.800 09:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.800 09:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.800 09:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.800 09:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:13.800 09:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:14.060 09:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:14.060 09:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.060 09:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:14.060 09:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:14.060 09:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:14.060 09:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.060 09:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:14.060 09:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:14.060 09:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.060 09:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:14.060 09:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:14.061 09:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
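Each pass above is one iteration of the script's connect_authenticate helper: the host initiator is pinned to a single digest/dhgroup pair, the host NQN is allowed on the subsystem with the key under test, and a successful controller attach proves the DH-HMAC-CHAP handshake completed with exactly those parameters. A minimal sketch of that sequence, using paths and key names from this run, with $HOSTNQN as an assumed shorthand for the uuid-based host NQN:

  HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  # Pin the initiator to one digest/dhgroup so the negotiation under test is unambiguous.
  $HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # Allow the host on the subsystem; the ctrlr key is passed only when one exists
  # for this keyid (key3 above is added without one).
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # A successful attach implies the DH-HMAC-CHAP exchange completed.
  $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1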
00:18:14.631 00:18:14.631 09:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.631 09:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.631 09:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.892 09:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.892 09:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.892 09:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:14.892 09:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.892 09:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:14.892 09:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.892 { 00:18:14.892 "cntlid": 47, 00:18:14.892 "qid": 0, 00:18:14.892 "state": "enabled", 00:18:14.892 "listen_address": { 00:18:14.892 "trtype": "TCP", 00:18:14.892 "adrfam": "IPv4", 00:18:14.892 "traddr": "10.0.0.2", 00:18:14.892 "trsvcid": "4420" 00:18:14.892 }, 00:18:14.892 "peer_address": { 00:18:14.892 "trtype": "TCP", 00:18:14.892 "adrfam": "IPv4", 00:18:14.892 "traddr": "10.0.0.1", 00:18:14.892 "trsvcid": "46708" 00:18:14.892 }, 00:18:14.892 "auth": { 00:18:14.892 "state": "completed", 00:18:14.892 "digest": "sha256", 00:18:14.892 "dhgroup": "ffdhe8192" 00:18:14.892 } 00:18:14.892 } 00:18:14.892 ]' 00:18:14.892 09:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.892 09:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.892 09:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.892 09:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.892 09:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.152 09:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.152 09:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.152 09:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.152 09:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MGI1ZWQwMWIxZjcwNWJiODc5ZTNkM2RhYTY0MjBlZDYyNzExODBlYTNjNDAyYmUwZTU5NTVmN2JiYzJhMDkxZIZ6OA8=: 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.094 
09:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.094 09:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.355 00:18:16.355 09:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.355 09:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.355 09:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.616 09:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.616 09:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.616 09:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.616 09:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.616 09:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.616 09:32:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.616 { 00:18:16.616 "cntlid": 49, 00:18:16.616 "qid": 0, 00:18:16.616 "state": "enabled", 00:18:16.616 "listen_address": { 00:18:16.616 "trtype": "TCP", 00:18:16.616 "adrfam": "IPv4", 00:18:16.616 "traddr": "10.0.0.2", 00:18:16.616 "trsvcid": "4420" 00:18:16.616 }, 00:18:16.616 "peer_address": { 00:18:16.616 "trtype": "TCP", 00:18:16.616 "adrfam": "IPv4", 00:18:16.616 "traddr": "10.0.0.1", 00:18:16.616 "trsvcid": "46736" 00:18:16.616 }, 00:18:16.616 "auth": { 00:18:16.616 "state": "completed", 00:18:16.616 "digest": "sha384", 00:18:16.616 "dhgroup": "null" 00:18:16.616 } 00:18:16.616 } 00:18:16.616 ]' 00:18:16.616 09:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.616 09:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.616 09:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.616 09:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:16.877 09:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.877 09:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.877 09:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.877 09:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.877 09:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==: --dhchap-ctrl-secret DHHC-1:03:MmZmMDc0OTAwNDk4MDZlOTY5YjZkYWRlNTI3MTNkNDM0ODczYzIwNTg4MjZjNmRiYzYyMTU3ZWEyNjQ5NTY4Nwknc4c=: 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.819 09:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.080 00:18:18.080 09:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.080 09:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.080 09:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.340 09:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.340 09:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.340 09:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.340 09:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.340 09:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.340 09:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.340 { 00:18:18.340 "cntlid": 51, 00:18:18.340 "qid": 0, 00:18:18.340 "state": "enabled", 00:18:18.340 "listen_address": { 00:18:18.340 "trtype": "TCP", 00:18:18.340 "adrfam": "IPv4", 00:18:18.340 "traddr": "10.0.0.2", 00:18:18.340 "trsvcid": "4420" 00:18:18.340 }, 00:18:18.340 "peer_address": { 00:18:18.340 "trtype": "TCP", 00:18:18.340 "adrfam": "IPv4", 00:18:18.340 "traddr": "10.0.0.1", 00:18:18.340 "trsvcid": "46768" 00:18:18.340 }, 00:18:18.340 "auth": { 00:18:18.340 "state": "completed", 00:18:18.340 "digest": "sha384", 00:18:18.340 "dhgroup": "null" 00:18:18.340 } 00:18:18.340 } 00:18:18.340 ]' 00:18:18.340 09:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.601 09:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.601 09:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.601 09:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
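The three jq probes after each attach assert what was actually negotiated on the target-side qpair rather than merely that the connection came up. Condensed from the trace, assuming $digest and $dhgroup hold the current loop values and that the captured JSON is fed to jq via a here-string:

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # All three fields must match what this iteration configured.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]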
00:18:18.601 09:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.601 09:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.601 09:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.601 09:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.861 09:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQwYzY4NDc3NDFjNmIxZmQ2OTM1N2E1MjhmMGNlYmZBnqjx: --dhchap-ctrl-secret DHHC-1:02:ZGZjMjZlN2YzMmRhMjdmNzlmNDYwZTA5NDYxZWRiYzk1ZDVkYzgyMGViOGUzNzA1cJlE/A==: 00:18:19.432 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.432 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.432 09:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.432 09:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.432 09:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.432 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.432 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:19.432 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:19.693 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:19.693 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.693 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:19.693 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:19.693 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:19.693 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.693 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.693 09:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.693 09:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.693 09:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.693 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:19.694 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.955 00:18:19.955 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.955 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.955 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.217 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.217 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.217 09:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:20.217 09:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.217 09:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:20.217 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.217 { 00:18:20.217 "cntlid": 53, 00:18:20.217 "qid": 0, 00:18:20.217 "state": "enabled", 00:18:20.217 "listen_address": { 00:18:20.217 "trtype": "TCP", 00:18:20.217 "adrfam": "IPv4", 00:18:20.217 "traddr": "10.0.0.2", 00:18:20.217 "trsvcid": "4420" 00:18:20.217 }, 00:18:20.217 "peer_address": { 00:18:20.217 "trtype": "TCP", 00:18:20.217 "adrfam": "IPv4", 00:18:20.217 "traddr": "10.0.0.1", 00:18:20.217 "trsvcid": "40256" 00:18:20.217 }, 00:18:20.217 "auth": { 00:18:20.217 "state": "completed", 00:18:20.217 "digest": "sha384", 00:18:20.217 "dhgroup": "null" 00:18:20.217 } 00:18:20.217 } 00:18:20.217 ]' 00:18:20.217 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.217 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.217 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.217 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:20.217 09:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.217 09:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.217 09:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.217 09:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.477 09:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDU5MGEyMzkwZjhmNDY4ZmM1YjhhYjliZDk3YTU3MDkwNmE4YzU2ZmNkMTQ3NDM1H3XLlg==: --dhchap-ctrl-secret DHHC-1:01:Yjg4M2JhNDIxOWQ4NDBjNDRlN2IxZDRkM2IxZDM1MGVMXTYn: 00:18:21.418 09:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.418 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:21.418 09:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.418 09:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.418 09:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.418 09:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.418 09:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.418 09:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:21.418 09:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:21.418 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:21.418 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.418 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:21.418 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:21.418 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:21.418 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.418 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:21.418 09:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.418 09:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.418 09:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.418 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.418 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.679 00:18:21.679 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.679 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.679 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.940 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.940 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.940 09:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.940 09:32:53 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:21.940 09:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.940 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.940 { 00:18:21.940 "cntlid": 55, 00:18:21.940 "qid": 0, 00:18:21.940 "state": "enabled", 00:18:21.940 "listen_address": { 00:18:21.940 "trtype": "TCP", 00:18:21.940 "adrfam": "IPv4", 00:18:21.940 "traddr": "10.0.0.2", 00:18:21.940 "trsvcid": "4420" 00:18:21.940 }, 00:18:21.940 "peer_address": { 00:18:21.940 "trtype": "TCP", 00:18:21.940 "adrfam": "IPv4", 00:18:21.940 "traddr": "10.0.0.1", 00:18:21.940 "trsvcid": "40278" 00:18:21.940 }, 00:18:21.940 "auth": { 00:18:21.940 "state": "completed", 00:18:21.940 "digest": "sha384", 00:18:21.940 "dhgroup": "null" 00:18:21.940 } 00:18:21.940 } 00:18:21.940 ]' 00:18:21.940 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.940 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.940 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.940 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:21.940 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.201 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.201 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.201 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.201 09:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MGI1ZWQwMWIxZjcwNWJiODc5ZTNkM2RhYTY0MjBlZDYyNzExODBlYTNjNDAyYmUwZTU5NTVmN2JiYzJhMDkxZIZ6OA8=: 00:18:23.145 09:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.145 09:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.145 09:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:23.145 09:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.145 09:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:23.145 09:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.145 09:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.145 09:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:23.145 09:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:23.145 09:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:23.145 
09:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.145 09:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:23.145 09:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:23.145 09:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:23.145 09:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.145 09:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.145 09:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:23.145 09:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.146 09:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:23.146 09:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.146 09:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.407 00:18:23.407 09:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.407 09:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.407 09:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.678 09:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.679 09:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.679 09:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:23.679 09:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.679 09:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:23.679 09:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.679 { 00:18:23.679 "cntlid": 57, 00:18:23.679 "qid": 0, 00:18:23.679 "state": "enabled", 00:18:23.679 "listen_address": { 00:18:23.679 "trtype": "TCP", 00:18:23.679 "adrfam": "IPv4", 00:18:23.679 "traddr": "10.0.0.2", 00:18:23.679 "trsvcid": "4420" 00:18:23.679 }, 00:18:23.679 "peer_address": { 00:18:23.679 "trtype": "TCP", 00:18:23.679 "adrfam": "IPv4", 00:18:23.679 "traddr": "10.0.0.1", 00:18:23.679 "trsvcid": "40302" 00:18:23.679 }, 00:18:23.679 "auth": { 00:18:23.679 "state": "completed", 00:18:23.679 "digest": "sha384", 00:18:23.679 "dhgroup": "ffdhe2048" 00:18:23.679 } 00:18:23.679 } 00:18:23.679 ]' 00:18:23.679 09:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.679 09:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.679 09:32:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.679 09:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:23.679 09:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.968 09:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.968 09:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.968 09:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.968 09:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==: --dhchap-ctrl-secret DHHC-1:03:MmZmMDc0OTAwNDk4MDZlOTY5YjZkYWRlNTI3MTNkNDM0ODczYzIwNTg4MjZjNmRiYzYyMTU3ZWEyNjQ5NTY4Nwknc4c=: 00:18:24.911 09:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.912 09:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.912 09:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.912 09:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.912 09:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.912 09:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.912 09:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:24.912 09:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:24.912 09:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:24.912 09:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.912 09:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:24.912 09:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:24.912 09:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:24.912 09:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.912 09:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.912 09:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.912 09:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.912 09:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.912 09:32:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.912 09:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.173 00:18:25.173 09:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.173 09:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.173 09:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.434 09:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.434 09:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.434 09:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.434 09:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.434 09:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.434 09:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.434 { 00:18:25.434 "cntlid": 59, 00:18:25.434 "qid": 0, 00:18:25.434 "state": "enabled", 00:18:25.434 "listen_address": { 00:18:25.434 "trtype": "TCP", 00:18:25.434 "adrfam": "IPv4", 00:18:25.434 "traddr": "10.0.0.2", 00:18:25.434 "trsvcid": "4420" 00:18:25.434 }, 00:18:25.434 "peer_address": { 00:18:25.434 "trtype": "TCP", 00:18:25.434 "adrfam": "IPv4", 00:18:25.434 "traddr": "10.0.0.1", 00:18:25.434 "trsvcid": "40320" 00:18:25.434 }, 00:18:25.434 "auth": { 00:18:25.434 "state": "completed", 00:18:25.434 "digest": "sha384", 00:18:25.434 "dhgroup": "ffdhe2048" 00:18:25.434 } 00:18:25.434 } 00:18:25.434 ]' 00:18:25.434 09:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.434 09:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.434 09:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.694 09:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:25.694 09:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.694 09:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.694 09:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.694 09:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.694 09:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:01:ZmQwYzY4NDc3NDFjNmIxZmQ2OTM1N2E1MjhmMGNlYmZBnqjx: --dhchap-ctrl-secret DHHC-1:02:ZGZjMjZlN2YzMmRhMjdmNzlmNDYwZTA5NDYxZWRiYzk1ZDVkYzgyMGViOGUzNzA1cJlE/A==: 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.637 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.898 00:18:26.898 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.898 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.898 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:27.158 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.158 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.158 09:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.158 09:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.158 09:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.158 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.158 { 00:18:27.158 "cntlid": 61, 00:18:27.158 "qid": 0, 00:18:27.158 "state": "enabled", 00:18:27.158 "listen_address": { 00:18:27.158 "trtype": "TCP", 00:18:27.158 "adrfam": "IPv4", 00:18:27.158 "traddr": "10.0.0.2", 00:18:27.158 "trsvcid": "4420" 00:18:27.158 }, 00:18:27.158 "peer_address": { 00:18:27.158 "trtype": "TCP", 00:18:27.158 "adrfam": "IPv4", 00:18:27.158 "traddr": "10.0.0.1", 00:18:27.158 "trsvcid": "40356" 00:18:27.158 }, 00:18:27.158 "auth": { 00:18:27.158 "state": "completed", 00:18:27.158 "digest": "sha384", 00:18:27.158 "dhgroup": "ffdhe2048" 00:18:27.158 } 00:18:27.158 } 00:18:27.158 ]' 00:18:27.158 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.158 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.158 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.418 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:27.418 09:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.418 09:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.418 09:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.418 09:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.679 09:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDU5MGEyMzkwZjhmNDY4ZmM1YjhhYjliZDk3YTU3MDkwNmE4YzU2ZmNkMTQ3NDM1H3XLlg==: --dhchap-ctrl-secret DHHC-1:01:Yjg4M2JhNDIxOWQ4NDBjNDRlN2IxZDRkM2IxZDM1MGVMXTYn: 00:18:28.250 09:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.250 09:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.250 09:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.250 09:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.250 09:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.250 09:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.250 09:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:18:28.250 09:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:28.511 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:28.511 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.511 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:28.511 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:28.511 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:28.511 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.511 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:28.511 09:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.511 09:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.511 09:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.511 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.511 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.772 00:18:28.772 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.772 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.772 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.033 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.033 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.033 09:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.033 09:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.033 09:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:29.033 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.033 { 00:18:29.033 "cntlid": 63, 00:18:29.033 "qid": 0, 00:18:29.033 "state": "enabled", 00:18:29.033 "listen_address": { 00:18:29.033 "trtype": "TCP", 00:18:29.033 "adrfam": "IPv4", 00:18:29.033 "traddr": "10.0.0.2", 00:18:29.033 "trsvcid": "4420" 00:18:29.033 }, 00:18:29.033 "peer_address": { 00:18:29.033 "trtype": "TCP", 00:18:29.033 "adrfam": "IPv4", 00:18:29.033 "traddr": "10.0.0.1", 00:18:29.033 "trsvcid": "46666" 00:18:29.033 }, 00:18:29.033 "auth": { 00:18:29.033 "state": "completed", 00:18:29.033 "digest": 
"sha384", 00:18:29.033 "dhgroup": "ffdhe2048" 00:18:29.033 } 00:18:29.033 } 00:18:29.033 ]' 00:18:29.033 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.033 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.033 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.033 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:29.033 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.033 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.033 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.033 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.293 09:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MGI1ZWQwMWIxZjcwNWJiODc5ZTNkM2RhYTY0MjBlZDYyNzExODBlYTNjNDAyYmUwZTU5NTVmN2JiYzJhMDkxZIZ6OA8=: 00:18:30.236 09:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.236 09:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.236 09:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.236 09:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.236 09:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.236 09:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.236 09:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.236 09:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:30.236 09:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:30.236 09:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:30.236 09:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.236 09:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:30.236 09:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:30.236 09:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:30.236 09:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.237 09:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:18:30.237 09:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.237 09:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.237 09:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.237 09:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.237 09:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.504 00:18:30.504 09:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.504 09:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.504 09:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.766 09:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.766 09:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.766 09:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.766 09:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.766 09:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.766 09:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.766 { 00:18:30.766 "cntlid": 65, 00:18:30.766 "qid": 0, 00:18:30.766 "state": "enabled", 00:18:30.766 "listen_address": { 00:18:30.766 "trtype": "TCP", 00:18:30.766 "adrfam": "IPv4", 00:18:30.766 "traddr": "10.0.0.2", 00:18:30.766 "trsvcid": "4420" 00:18:30.766 }, 00:18:30.766 "peer_address": { 00:18:30.766 "trtype": "TCP", 00:18:30.766 "adrfam": "IPv4", 00:18:30.766 "traddr": "10.0.0.1", 00:18:30.766 "trsvcid": "46700" 00:18:30.766 }, 00:18:30.766 "auth": { 00:18:30.766 "state": "completed", 00:18:30.766 "digest": "sha384", 00:18:30.766 "dhgroup": "ffdhe3072" 00:18:30.766 } 00:18:30.767 } 00:18:30.767 ]' 00:18:30.767 09:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.767 09:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.767 09:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.767 09:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:30.767 09:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.767 09:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.767 09:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.767 09:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.028 
09:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==: --dhchap-ctrl-secret DHHC-1:03:MmZmMDc0OTAwNDk4MDZlOTY5YjZkYWRlNTI3MTNkNDM0ODczYzIwNTg4MjZjNmRiYzYyMTU3ZWEyNjQ5NTY4Nwknc4c=: 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.968 09:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.229 00:18:32.229 09:33:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.229 09:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.229 09:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.490 09:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.490 09:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.490 09:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.490 09:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.490 09:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.490 09:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.490 { 00:18:32.490 "cntlid": 67, 00:18:32.490 "qid": 0, 00:18:32.490 "state": "enabled", 00:18:32.490 "listen_address": { 00:18:32.490 "trtype": "TCP", 00:18:32.490 "adrfam": "IPv4", 00:18:32.490 "traddr": "10.0.0.2", 00:18:32.490 "trsvcid": "4420" 00:18:32.490 }, 00:18:32.490 "peer_address": { 00:18:32.490 "trtype": "TCP", 00:18:32.490 "adrfam": "IPv4", 00:18:32.490 "traddr": "10.0.0.1", 00:18:32.490 "trsvcid": "46712" 00:18:32.490 }, 00:18:32.490 "auth": { 00:18:32.490 "state": "completed", 00:18:32.490 "digest": "sha384", 00:18:32.490 "dhgroup": "ffdhe3072" 00:18:32.490 } 00:18:32.490 } 00:18:32.490 ]' 00:18:32.490 09:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.490 09:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.490 09:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.750 09:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:32.750 09:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.750 09:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.750 09:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.751 09:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.011 09:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQwYzY4NDc3NDFjNmIxZmQ2OTM1N2E1MjhmMGNlYmZBnqjx: --dhchap-ctrl-secret DHHC-1:02:ZGZjMjZlN2YzMmRhMjdmNzlmNDYwZTA5NDYxZWRiYzk1ZDVkYzgyMGViOGUzNzA1cJlE/A==: 00:18:33.581 09:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.581 09:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.581 09:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.581 09:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.581 
09:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.581 09:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.581 09:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:33.581 09:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:33.842 09:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:33.842 09:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.842 09:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:33.842 09:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:33.842 09:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:33.842 09:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.842 09:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.842 09:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.842 09:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.842 09:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.842 09:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.842 09:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.102 00:18:34.102 09:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.102 09:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.102 09:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.363 09:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.363 09:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.363 09:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:34.363 09:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.363 09:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:34.363 09:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.363 { 00:18:34.363 "cntlid": 69, 00:18:34.363 "qid": 0, 00:18:34.363 "state": "enabled", 00:18:34.363 "listen_address": { 
00:18:34.363 "trtype": "TCP", 00:18:34.363 "adrfam": "IPv4", 00:18:34.363 "traddr": "10.0.0.2", 00:18:34.363 "trsvcid": "4420" 00:18:34.363 }, 00:18:34.363 "peer_address": { 00:18:34.363 "trtype": "TCP", 00:18:34.363 "adrfam": "IPv4", 00:18:34.363 "traddr": "10.0.0.1", 00:18:34.363 "trsvcid": "46750" 00:18:34.363 }, 00:18:34.363 "auth": { 00:18:34.363 "state": "completed", 00:18:34.363 "digest": "sha384", 00:18:34.363 "dhgroup": "ffdhe3072" 00:18:34.363 } 00:18:34.363 } 00:18:34.363 ]' 00:18:34.363 09:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.363 09:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.363 09:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.363 09:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:34.363 09:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.363 09:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.363 09:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.363 09:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.623 09:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDU5MGEyMzkwZjhmNDY4ZmM1YjhhYjliZDk3YTU3MDkwNmE4YzU2ZmNkMTQ3NDM1H3XLlg==: --dhchap-ctrl-secret DHHC-1:01:Yjg4M2JhNDIxOWQ4NDBjNDRlN2IxZDRkM2IxZDM1MGVMXTYn: 00:18:35.566 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.566 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.566 09:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:35.566 09:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.566 09:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:35.566 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.566 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:35.566 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:35.566 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:35.566 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.566 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:35.566 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:35.566 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:35.566 
09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.566 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:35.566 09:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:35.566 09:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.566 09:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:35.566 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.566 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.827 00:18:35.827 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.827 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.827 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.089 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.089 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.089 09:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.089 09:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.089 09:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.089 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.089 { 00:18:36.089 "cntlid": 71, 00:18:36.089 "qid": 0, 00:18:36.089 "state": "enabled", 00:18:36.089 "listen_address": { 00:18:36.089 "trtype": "TCP", 00:18:36.089 "adrfam": "IPv4", 00:18:36.089 "traddr": "10.0.0.2", 00:18:36.089 "trsvcid": "4420" 00:18:36.089 }, 00:18:36.089 "peer_address": { 00:18:36.089 "trtype": "TCP", 00:18:36.089 "adrfam": "IPv4", 00:18:36.089 "traddr": "10.0.0.1", 00:18:36.089 "trsvcid": "46772" 00:18:36.089 }, 00:18:36.089 "auth": { 00:18:36.089 "state": "completed", 00:18:36.089 "digest": "sha384", 00:18:36.089 "dhgroup": "ffdhe3072" 00:18:36.089 } 00:18:36.089 } 00:18:36.089 ]' 00:18:36.089 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.089 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.089 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.089 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:36.089 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.349 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.349 09:33:07 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.349 09:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.349 09:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MGI1ZWQwMWIxZjcwNWJiODc5ZTNkM2RhYTY0MjBlZDYyNzExODBlYTNjNDAyYmUwZTU5NTVmN2JiYzJhMDkxZIZ6OA8=: 00:18:37.292 09:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.292 09:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.292 09:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.292 09:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.292 09:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.292 09:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.292 09:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.292 09:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:37.292 09:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:37.292 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:37.292 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.292 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:37.292 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:37.292 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:37.292 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.292 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.292 09:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.292 09:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.292 09:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.292 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.292 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.553 00:18:37.816 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.816 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.816 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.816 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.816 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.816 09:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.816 09:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.816 09:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.816 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.816 { 00:18:37.816 "cntlid": 73, 00:18:37.816 "qid": 0, 00:18:37.816 "state": "enabled", 00:18:37.816 "listen_address": { 00:18:37.816 "trtype": "TCP", 00:18:37.816 "adrfam": "IPv4", 00:18:37.816 "traddr": "10.0.0.2", 00:18:37.816 "trsvcid": "4420" 00:18:37.816 }, 00:18:37.816 "peer_address": { 00:18:37.816 "trtype": "TCP", 00:18:37.816 "adrfam": "IPv4", 00:18:37.816 "traddr": "10.0.0.1", 00:18:37.816 "trsvcid": "46800" 00:18:37.816 }, 00:18:37.816 "auth": { 00:18:37.816 "state": "completed", 00:18:37.816 "digest": "sha384", 00:18:37.816 "dhgroup": "ffdhe4096" 00:18:37.816 } 00:18:37.816 } 00:18:37.816 ]' 00:18:37.816 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.077 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.077 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.077 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:38.077 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.077 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.077 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.077 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.337 09:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==: --dhchap-ctrl-secret DHHC-1:03:MmZmMDc0OTAwNDk4MDZlOTY5YjZkYWRlNTI3MTNkNDM0ODczYzIwNTg4MjZjNmRiYzYyMTU3ZWEyNjQ5NTY4Nwknc4c=: 00:18:38.910 09:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.910 09:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.910 09:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.910 09:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.910 09:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.910 09:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.910 09:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:38.910 09:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:39.171 09:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:39.171 09:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.171 09:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:39.171 09:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:39.171 09:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:39.171 09:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.171 09:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.171 09:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.171 09:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.171 09:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.171 09:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.171 09:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.432 00:18:39.432 09:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.432 09:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.432 09:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.693 09:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.694 09:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.694 09:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.694 09:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:18:39.694 09:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.694 09:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.694 { 00:18:39.694 "cntlid": 75, 00:18:39.694 "qid": 0, 00:18:39.694 "state": "enabled", 00:18:39.694 "listen_address": { 00:18:39.694 "trtype": "TCP", 00:18:39.694 "adrfam": "IPv4", 00:18:39.694 "traddr": "10.0.0.2", 00:18:39.694 "trsvcid": "4420" 00:18:39.694 }, 00:18:39.694 "peer_address": { 00:18:39.694 "trtype": "TCP", 00:18:39.694 "adrfam": "IPv4", 00:18:39.694 "traddr": "10.0.0.1", 00:18:39.694 "trsvcid": "39998" 00:18:39.694 }, 00:18:39.694 "auth": { 00:18:39.694 "state": "completed", 00:18:39.694 "digest": "sha384", 00:18:39.694 "dhgroup": "ffdhe4096" 00:18:39.694 } 00:18:39.694 } 00:18:39.694 ]' 00:18:39.694 09:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.694 09:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.694 09:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.694 09:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:39.694 09:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.954 09:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.954 09:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.954 09:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.954 09:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQwYzY4NDc3NDFjNmIxZmQ2OTM1N2E1MjhmMGNlYmZBnqjx: --dhchap-ctrl-secret DHHC-1:02:ZGZjMjZlN2YzMmRhMjdmNzlmNDYwZTA5NDYxZWRiYzk1ZDVkYzgyMGViOGUzNzA1cJlE/A==: 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.896 09:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.468 00:18:41.468 09:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.468 09:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.468 09:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.468 09:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.468 09:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.468 09:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.468 09:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.468 09:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.468 09:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.468 { 00:18:41.468 "cntlid": 77, 00:18:41.468 "qid": 0, 00:18:41.468 "state": "enabled", 00:18:41.468 "listen_address": { 00:18:41.468 "trtype": "TCP", 00:18:41.468 "adrfam": "IPv4", 00:18:41.468 "traddr": "10.0.0.2", 00:18:41.468 "trsvcid": "4420" 00:18:41.468 }, 00:18:41.468 "peer_address": { 00:18:41.468 "trtype": "TCP", 00:18:41.468 "adrfam": "IPv4", 00:18:41.468 "traddr": "10.0.0.1", 00:18:41.468 "trsvcid": "40028" 00:18:41.468 }, 00:18:41.468 "auth": { 00:18:41.468 "state": "completed", 00:18:41.468 "digest": "sha384", 00:18:41.468 "dhgroup": "ffdhe4096" 00:18:41.468 } 00:18:41.468 } 00:18:41.468 ]' 00:18:41.468 09:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.468 09:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.468 09:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:18:41.728 09:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:41.728 09:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.728 09:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.728 09:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.728 09:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.989 09:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDU5MGEyMzkwZjhmNDY4ZmM1YjhhYjliZDk3YTU3MDkwNmE4YzU2ZmNkMTQ3NDM1H3XLlg==: --dhchap-ctrl-secret DHHC-1:01:Yjg4M2JhNDIxOWQ4NDBjNDRlN2IxZDRkM2IxZDM1MGVMXTYn: 00:18:42.560 09:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.560 09:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:42.560 09:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.560 09:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.560 09:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.560 09:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.560 09:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:42.560 09:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:42.821 09:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:42.821 09:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.821 09:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:42.821 09:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:42.821 09:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:42.821 09:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.821 09:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:42.821 09:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.821 09:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.821 09:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.821 09:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.821 09:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.082 00:18:43.082 09:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.082 09:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.082 09:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.342 09:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.342 09:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.342 09:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.342 09:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.342 09:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.342 09:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.342 { 00:18:43.342 "cntlid": 79, 00:18:43.342 "qid": 0, 00:18:43.342 "state": "enabled", 00:18:43.342 "listen_address": { 00:18:43.342 "trtype": "TCP", 00:18:43.342 "adrfam": "IPv4", 00:18:43.342 "traddr": "10.0.0.2", 00:18:43.342 "trsvcid": "4420" 00:18:43.342 }, 00:18:43.342 "peer_address": { 00:18:43.342 "trtype": "TCP", 00:18:43.342 "adrfam": "IPv4", 00:18:43.342 "traddr": "10.0.0.1", 00:18:43.342 "trsvcid": "40050" 00:18:43.342 }, 00:18:43.342 "auth": { 00:18:43.342 "state": "completed", 00:18:43.342 "digest": "sha384", 00:18:43.342 "dhgroup": "ffdhe4096" 00:18:43.342 } 00:18:43.342 } 00:18:43.342 ]' 00:18:43.342 09:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.342 09:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.342 09:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.342 09:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:43.342 09:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.603 09:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.603 09:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.603 09:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.603 09:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MGI1ZWQwMWIxZjcwNWJiODc5ZTNkM2RhYTY0MjBlZDYyNzExODBlYTNjNDAyYmUwZTU5NTVmN2JiYzJhMDkxZIZ6OA8=: 00:18:44.544 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.544 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.544 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.544 09:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.544 09:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.544 09:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.544 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.544 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.544 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:44.544 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:44.545 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:44.545 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.545 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:44.545 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:44.545 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:44.545 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.545 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.545 09:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.545 09:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.545 09:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.545 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.545 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.116 00:18:45.116 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.116 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.116 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.377 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.377 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.377 09:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.377 09:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.377 09:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.377 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.377 { 00:18:45.377 "cntlid": 81, 00:18:45.377 "qid": 0, 00:18:45.377 "state": "enabled", 00:18:45.377 "listen_address": { 00:18:45.377 "trtype": "TCP", 00:18:45.377 "adrfam": "IPv4", 00:18:45.377 "traddr": "10.0.0.2", 00:18:45.377 "trsvcid": "4420" 00:18:45.377 }, 00:18:45.377 "peer_address": { 00:18:45.377 "trtype": "TCP", 00:18:45.377 "adrfam": "IPv4", 00:18:45.377 "traddr": "10.0.0.1", 00:18:45.377 "trsvcid": "40082" 00:18:45.377 }, 00:18:45.377 "auth": { 00:18:45.377 "state": "completed", 00:18:45.377 "digest": "sha384", 00:18:45.377 "dhgroup": "ffdhe6144" 00:18:45.377 } 00:18:45.377 } 00:18:45.377 ]' 00:18:45.377 09:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.377 09:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.377 09:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.377 09:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:45.377 09:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.377 09:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.377 09:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.377 09:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.637 09:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==: --dhchap-ctrl-secret DHHC-1:03:MmZmMDc0OTAwNDk4MDZlOTY5YjZkYWRlNTI3MTNkNDM0ODczYzIwNTg4MjZjNmRiYzYyMTU3ZWEyNjQ5NTY4Nwknc4c=: 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.580 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.151 00:18:47.152 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.152 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.152 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.152 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.152 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.152 09:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.152 09:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.152 09:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.152 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.152 { 00:18:47.152 "cntlid": 83, 00:18:47.152 "qid": 0, 00:18:47.152 "state": "enabled", 00:18:47.152 "listen_address": { 00:18:47.152 "trtype": "TCP", 00:18:47.152 "adrfam": "IPv4", 00:18:47.152 "traddr": "10.0.0.2", 00:18:47.152 "trsvcid": "4420" 00:18:47.152 }, 00:18:47.152 "peer_address": { 00:18:47.152 "trtype": "TCP", 00:18:47.152 "adrfam": "IPv4", 00:18:47.152 "traddr": "10.0.0.1", 00:18:47.152 "trsvcid": "40108" 00:18:47.152 }, 00:18:47.152 "auth": { 00:18:47.152 "state": "completed", 00:18:47.152 "digest": "sha384", 00:18:47.152 
"dhgroup": "ffdhe6144" 00:18:47.152 } 00:18:47.152 } 00:18:47.152 ]' 00:18:47.152 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.412 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.412 09:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.412 09:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:47.412 09:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.412 09:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.412 09:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.412 09:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.673 09:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQwYzY4NDc3NDFjNmIxZmQ2OTM1N2E1MjhmMGNlYmZBnqjx: --dhchap-ctrl-secret DHHC-1:02:ZGZjMjZlN2YzMmRhMjdmNzlmNDYwZTA5NDYxZWRiYzk1ZDVkYzgyMGViOGUzNzA1cJlE/A==: 00:18:48.245 09:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.245 09:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:48.245 09:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.245 09:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.245 09:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.245 09:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.245 09:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:48.245 09:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:48.505 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:48.505 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.505 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:48.505 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:48.505 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:48.505 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.506 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.506 09:33:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.506 09:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.506 09:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.506 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.506 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.077 00:18:49.077 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.077 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.077 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.077 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.077 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.077 09:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.077 09:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.077 09:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.077 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.077 { 00:18:49.077 "cntlid": 85, 00:18:49.077 "qid": 0, 00:18:49.077 "state": "enabled", 00:18:49.077 "listen_address": { 00:18:49.077 "trtype": "TCP", 00:18:49.077 "adrfam": "IPv4", 00:18:49.077 "traddr": "10.0.0.2", 00:18:49.077 "trsvcid": "4420" 00:18:49.077 }, 00:18:49.077 "peer_address": { 00:18:49.077 "trtype": "TCP", 00:18:49.077 "adrfam": "IPv4", 00:18:49.077 "traddr": "10.0.0.1", 00:18:49.077 "trsvcid": "50624" 00:18:49.077 }, 00:18:49.077 "auth": { 00:18:49.077 "state": "completed", 00:18:49.077 "digest": "sha384", 00:18:49.077 "dhgroup": "ffdhe6144" 00:18:49.077 } 00:18:49.077 } 00:18:49.077 ]' 00:18:49.077 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.337 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.337 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.337 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:49.337 09:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.337 09:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.337 09:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.337 09:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.596 09:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDU5MGEyMzkwZjhmNDY4ZmM1YjhhYjliZDk3YTU3MDkwNmE4YzU2ZmNkMTQ3NDM1H3XLlg==: --dhchap-ctrl-secret DHHC-1:01:Yjg4M2JhNDIxOWQ4NDBjNDRlN2IxZDRkM2IxZDM1MGVMXTYn: 00:18:50.166 09:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.166 09:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.166 09:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:50.166 09:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.166 09:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:50.166 09:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.166 09:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:50.166 09:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:50.426 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:50.426 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.426 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:50.426 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:50.426 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:50.426 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.426 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:50.426 09:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:50.426 09:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.426 09:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:50.426 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:50.426 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:50.996 00:18:50.996 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.996 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.996 09:33:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.996 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.996 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.996 09:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:50.996 09:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.996 09:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:50.996 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.996 { 00:18:50.996 "cntlid": 87, 00:18:50.996 "qid": 0, 00:18:50.996 "state": "enabled", 00:18:50.996 "listen_address": { 00:18:50.996 "trtype": "TCP", 00:18:50.996 "adrfam": "IPv4", 00:18:50.996 "traddr": "10.0.0.2", 00:18:50.996 "trsvcid": "4420" 00:18:50.996 }, 00:18:50.996 "peer_address": { 00:18:50.996 "trtype": "TCP", 00:18:50.996 "adrfam": "IPv4", 00:18:50.996 "traddr": "10.0.0.1", 00:18:50.996 "trsvcid": "50634" 00:18:50.996 }, 00:18:50.996 "auth": { 00:18:50.996 "state": "completed", 00:18:50.996 "digest": "sha384", 00:18:50.996 "dhgroup": "ffdhe6144" 00:18:50.996 } 00:18:50.996 } 00:18:50.996 ]' 00:18:50.996 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.996 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.996 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.256 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:51.256 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.256 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.256 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.256 09:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.256 09:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MGI1ZWQwMWIxZjcwNWJiODc5ZTNkM2RhYTY0MjBlZDYyNzExODBlYTNjNDAyYmUwZTU5NTVmN2JiYzJhMDkxZIZ6OA8=: 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.196 09:33:23 
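Each key in this section is exercised with one and the same cycle: pin the host to a single digest/dhgroup pair, register the host NQN on the subsystem with the key under test, attach a controller over the SPDK host RPC socket, check the negotiated auth parameters on the target's qpair, detach, re-authenticate once more through kernel nvme-cli, and remove the host. A condensed sketch of that cycle, assuming the paths and NQNs of this run; the function and variable names are mine, and the target-side calls the trace issues via the rpc_cmd wrapper are shown as direct rpc.py invocations:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

verify_one_key() {
    local digest=$1 dhgroup=$2 keyid=$3 qpairs
    # Pin the host to one digest/dhgroup so the negotiated values are deterministic.
    "$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Allow the host on the target with the DH-HMAC-CHAP key pair under test.
    # (The trace omits the ctrlr key for slots with no ckey defined, e.g. key3 here.)
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # Authenticate from the host side ...
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # ... and confirm what the target's qpair reports.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]] || return 1
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]] || return 1
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]] || return 1
    # Tear down so the next key starts clean (the real cycle also runs the
    # nvme-cli connect/disconnect visible in the trace before removing the host).
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
}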
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.196 09:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.136 00:18:53.136 09:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.136 09:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.136 09:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.136 09:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.136 09:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.136 09:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.136 09:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.136 09:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.136 09:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.136 { 00:18:53.136 "cntlid": 89, 00:18:53.136 "qid": 0, 00:18:53.136 "state": "enabled", 00:18:53.136 "listen_address": { 00:18:53.136 "trtype": "TCP", 00:18:53.136 "adrfam": "IPv4", 00:18:53.136 "traddr": "10.0.0.2", 00:18:53.136 
"trsvcid": "4420" 00:18:53.136 }, 00:18:53.136 "peer_address": { 00:18:53.136 "trtype": "TCP", 00:18:53.136 "adrfam": "IPv4", 00:18:53.136 "traddr": "10.0.0.1", 00:18:53.136 "trsvcid": "50668" 00:18:53.136 }, 00:18:53.136 "auth": { 00:18:53.136 "state": "completed", 00:18:53.136 "digest": "sha384", 00:18:53.136 "dhgroup": "ffdhe8192" 00:18:53.136 } 00:18:53.136 } 00:18:53.136 ]' 00:18:53.136 09:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.136 09:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.137 09:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.137 09:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:53.137 09:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.427 09:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.427 09:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.427 09:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.428 09:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==: --dhchap-ctrl-secret DHHC-1:03:MmZmMDc0OTAwNDk4MDZlOTY5YjZkYWRlNTI3MTNkNDM0ODczYzIwNTg4MjZjNmRiYzYyMTU3ZWEyNjQ5NTY4Nwknc4c=: 00:18:54.368 09:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.368 09:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.368 09:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.368 09:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.368 09:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.368 09:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.368 09:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:54.368 09:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:54.368 09:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:54.368 09:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.368 09:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:54.368 09:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:54.368 09:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:54.368 09:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.368 09:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.368 09:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.368 09:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.368 09:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.368 09:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.368 09:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.939 00:18:54.939 09:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.939 09:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.939 09:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.199 09:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.199 09:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.199 09:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.199 09:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.199 09:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.199 09:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.199 { 00:18:55.199 "cntlid": 91, 00:18:55.199 "qid": 0, 00:18:55.199 "state": "enabled", 00:18:55.200 "listen_address": { 00:18:55.200 "trtype": "TCP", 00:18:55.200 "adrfam": "IPv4", 00:18:55.200 "traddr": "10.0.0.2", 00:18:55.200 "trsvcid": "4420" 00:18:55.200 }, 00:18:55.200 "peer_address": { 00:18:55.200 "trtype": "TCP", 00:18:55.200 "adrfam": "IPv4", 00:18:55.200 "traddr": "10.0.0.1", 00:18:55.200 "trsvcid": "50692" 00:18:55.200 }, 00:18:55.200 "auth": { 00:18:55.200 "state": "completed", 00:18:55.200 "digest": "sha384", 00:18:55.200 "dhgroup": "ffdhe8192" 00:18:55.200 } 00:18:55.200 } 00:18:55.200 ]' 00:18:55.200 09:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.200 09:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.200 09:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.460 09:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:55.460 09:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.460 09:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.460 09:33:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.460 09:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.720 09:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQwYzY4NDc3NDFjNmIxZmQ2OTM1N2E1MjhmMGNlYmZBnqjx: --dhchap-ctrl-secret DHHC-1:02:ZGZjMjZlN2YzMmRhMjdmNzlmNDYwZTA5NDYxZWRiYzk1ZDVkYzgyMGViOGUzNzA1cJlE/A==: 00:18:56.289 09:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.289 09:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.289 09:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.289 09:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.289 09:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.289 09:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.289 09:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:56.289 09:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:56.550 09:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:56.550 09:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.550 09:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:56.550 09:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:56.550 09:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:56.550 09:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.550 09:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.550 09:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.550 09:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.550 09:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.550 09:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.550 09:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.121 00:18:57.121 09:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.121 09:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.121 09:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.382 09:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.382 09:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.382 09:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.382 09:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.382 09:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.382 09:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.382 { 00:18:57.382 "cntlid": 93, 00:18:57.382 "qid": 0, 00:18:57.382 "state": "enabled", 00:18:57.382 "listen_address": { 00:18:57.382 "trtype": "TCP", 00:18:57.382 "adrfam": "IPv4", 00:18:57.382 "traddr": "10.0.0.2", 00:18:57.382 "trsvcid": "4420" 00:18:57.382 }, 00:18:57.382 "peer_address": { 00:18:57.382 "trtype": "TCP", 00:18:57.382 "adrfam": "IPv4", 00:18:57.382 "traddr": "10.0.0.1", 00:18:57.382 "trsvcid": "50718" 00:18:57.382 }, 00:18:57.382 "auth": { 00:18:57.382 "state": "completed", 00:18:57.382 "digest": "sha384", 00:18:57.382 "dhgroup": "ffdhe8192" 00:18:57.382 } 00:18:57.382 } 00:18:57.382 ]' 00:18:57.382 09:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.382 09:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.382 09:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.382 09:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:57.382 09:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.643 09:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.643 09:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.643 09:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.643 09:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDU5MGEyMzkwZjhmNDY4ZmM1YjhhYjliZDk3YTU3MDkwNmE4YzU2ZmNkMTQ3NDM1H3XLlg==: --dhchap-ctrl-secret DHHC-1:01:Yjg4M2JhNDIxOWQ4NDBjNDRlN2IxZDRkM2IxZDM1MGVMXTYn: 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.585 09:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.158 00:18:59.419 09:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.419 09:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.419 09:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.419 09:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.419 09:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.419 09:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.419 09:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.419 09:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.419 09:33:31 
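All of the --dhchap-secret/--dhchap-ctrl-secret strings in this trace share one shape: "DHHC-1:", a two-digit marker, a base64 payload, and a closing colon. On my reading of the NVMe in-band authentication secret format (not something the trace itself states), the marker records how the secret was transformed (00 = plain, 01/02/03 = SHA-256/384/512) and the payload is the secret followed by a 4-byte CRC-32; in this run the marker happens to track the key index. A quick size check on a secret copied verbatim from the trace:

# 72 base64 chars with "==" padding decode to 52 bytes: a 48-byte secret plus the 4-byte CRC.
s='DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==:'
payload=${s#DHHC-1:*:}    # strip the "DHHC-1:00:" prefix
payload=${payload%:}      # and the trailing colon
base64 -d <<< "$payload" | wc -c    # prints 52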
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.419 { 00:18:59.419 "cntlid": 95, 00:18:59.419 "qid": 0, 00:18:59.419 "state": "enabled", 00:18:59.419 "listen_address": { 00:18:59.419 "trtype": "TCP", 00:18:59.419 "adrfam": "IPv4", 00:18:59.419 "traddr": "10.0.0.2", 00:18:59.419 "trsvcid": "4420" 00:18:59.419 }, 00:18:59.419 "peer_address": { 00:18:59.419 "trtype": "TCP", 00:18:59.419 "adrfam": "IPv4", 00:18:59.419 "traddr": "10.0.0.1", 00:18:59.419 "trsvcid": "60312" 00:18:59.419 }, 00:18:59.419 "auth": { 00:18:59.419 "state": "completed", 00:18:59.419 "digest": "sha384", 00:18:59.419 "dhgroup": "ffdhe8192" 00:18:59.419 } 00:18:59.419 } 00:18:59.419 ]' 00:18:59.419 09:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.679 09:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.679 09:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.679 09:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:59.679 09:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.679 09:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.679 09:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.680 09:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.939 09:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MGI1ZWQwMWIxZjcwNWJiODc5ZTNkM2RhYTY0MjBlZDYyNzExODBlYTNjNDAyYmUwZTU5NTVmN2JiYzJhMDkxZIZ6OA8=: 00:19:00.515 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.515 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.515 09:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.515 09:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.515 09:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.515 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:00.515 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:00.515 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.515 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:00.515 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:00.778 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:00.778 09:33:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.778 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:00.778 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:00.778 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:00.778 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.778 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.778 09:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.778 09:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.778 09:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.778 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.778 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.038 00:19:01.038 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.038 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.038 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.299 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.299 09:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.299 09:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.299 09:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.299 09:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.299 09:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.299 { 00:19:01.299 "cntlid": 97, 00:19:01.299 "qid": 0, 00:19:01.299 "state": "enabled", 00:19:01.299 "listen_address": { 00:19:01.299 "trtype": "TCP", 00:19:01.299 "adrfam": "IPv4", 00:19:01.299 "traddr": "10.0.0.2", 00:19:01.299 "trsvcid": "4420" 00:19:01.299 }, 00:19:01.299 "peer_address": { 00:19:01.299 "trtype": "TCP", 00:19:01.299 "adrfam": "IPv4", 00:19:01.299 "traddr": "10.0.0.1", 00:19:01.299 "trsvcid": "60346" 00:19:01.299 }, 00:19:01.299 "auth": { 00:19:01.299 "state": "completed", 00:19:01.299 "digest": "sha512", 00:19:01.299 "dhgroup": "null" 00:19:01.299 } 00:19:01.299 } 00:19:01.299 ]' 00:19:01.299 09:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.299 09:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.299 09:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:19:01.299 09:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:01.299 09:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.560 09:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.560 09:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.560 09:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.560 09:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==: --dhchap-ctrl-secret DHHC-1:03:MmZmMDc0OTAwNDk4MDZlOTY5YjZkYWRlNTI3MTNkNDM0ODczYzIwNTg4MjZjNmRiYzYyMTU3ZWEyNjQ5NTY4Nwknc4c=: 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.500 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.761 00:19:02.761 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.761 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.761 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.021 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.021 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.021 09:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.021 09:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.021 09:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.021 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.021 { 00:19:03.021 "cntlid": 99, 00:19:03.021 "qid": 0, 00:19:03.021 "state": "enabled", 00:19:03.021 "listen_address": { 00:19:03.021 "trtype": "TCP", 00:19:03.021 "adrfam": "IPv4", 00:19:03.021 "traddr": "10.0.0.2", 00:19:03.021 "trsvcid": "4420" 00:19:03.021 }, 00:19:03.021 "peer_address": { 00:19:03.021 "trtype": "TCP", 00:19:03.021 "adrfam": "IPv4", 00:19:03.021 "traddr": "10.0.0.1", 00:19:03.021 "trsvcid": "60380" 00:19:03.021 }, 00:19:03.021 "auth": { 00:19:03.021 "state": "completed", 00:19:03.021 "digest": "sha512", 00:19:03.021 "dhgroup": "null" 00:19:03.021 } 00:19:03.021 } 00:19:03.021 ]' 00:19:03.021 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.021 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.021 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.281 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:03.281 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.281 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.281 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.281 09:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.542 09:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQwYzY4NDc3NDFjNmIxZmQ2OTM1N2E1MjhmMGNlYmZBnqjx: --dhchap-ctrl-secret DHHC-1:02:ZGZjMjZlN2YzMmRhMjdmNzlmNDYwZTA5NDYxZWRiYzk1ZDVkYzgyMGViOGUzNzA1cJlE/A==: 
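Between the RPC detach and the host removal, every cycle also authenticates through the kernel initiator, which is where the "disconnected 1 controller(s)" lines come from. The nvme-cli flags below are exactly the ones in the trace; $host_key and $ctrl_key are placeholders standing in for the per-key DHHC-1 strings (the key3 cycles pass only --dhchap-secret, since no controller key is defined for that slot):

hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "${hostnqn##*:}" \
    --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # expect: NQN:... disconnected 1 controller(s)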
00:19:04.113 09:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.113 09:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.113 09:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.113 09:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.113 09:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.113 09:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.113 09:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:04.113 09:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:04.374 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:04.374 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.374 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.374 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:04.374 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.374 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.374 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.374 09:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.374 09:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.375 09:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.375 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.375 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.636 00:19:04.636 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.636 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.636 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.896 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.896 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.896 09:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.896 09:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.896 09:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.896 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.896 { 00:19:04.896 "cntlid": 101, 00:19:04.896 "qid": 0, 00:19:04.896 "state": "enabled", 00:19:04.896 "listen_address": { 00:19:04.896 "trtype": "TCP", 00:19:04.896 "adrfam": "IPv4", 00:19:04.896 "traddr": "10.0.0.2", 00:19:04.896 "trsvcid": "4420" 00:19:04.896 }, 00:19:04.896 "peer_address": { 00:19:04.896 "trtype": "TCP", 00:19:04.896 "adrfam": "IPv4", 00:19:04.896 "traddr": "10.0.0.1", 00:19:04.896 "trsvcid": "60406" 00:19:04.896 }, 00:19:04.896 "auth": { 00:19:04.896 "state": "completed", 00:19:04.896 "digest": "sha512", 00:19:04.896 "dhgroup": "null" 00:19:04.896 } 00:19:04.896 } 00:19:04.896 ]' 00:19:04.896 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.896 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.896 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.896 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:04.896 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.896 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.896 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.896 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.157 09:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDU5MGEyMzkwZjhmNDY4ZmM1YjhhYjliZDk3YTU3MDkwNmE4YzU2ZmNkMTQ3NDM1H3XLlg==: --dhchap-ctrl-secret DHHC-1:01:Yjg4M2JhNDIxOWQ4NDBjNDRlN2IxZDRkM2IxZDM1MGVMXTYn: 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.101 09:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.361 00:19:06.361 09:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.361 09:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.361 09:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.622 09:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.622 09:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.622 09:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.622 09:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.622 09:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.622 09:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.622 { 00:19:06.622 "cntlid": 103, 00:19:06.622 "qid": 0, 00:19:06.622 "state": "enabled", 00:19:06.622 "listen_address": { 00:19:06.622 "trtype": "TCP", 00:19:06.622 "adrfam": "IPv4", 00:19:06.622 "traddr": "10.0.0.2", 00:19:06.622 "trsvcid": "4420" 00:19:06.622 }, 00:19:06.622 "peer_address": { 00:19:06.622 "trtype": "TCP", 00:19:06.622 "adrfam": "IPv4", 00:19:06.622 "traddr": "10.0.0.1", 00:19:06.622 "trsvcid": "60420" 00:19:06.622 }, 00:19:06.622 "auth": { 00:19:06.622 "state": "completed", 00:19:06.622 "digest": "sha512", 00:19:06.622 "dhgroup": "null" 00:19:06.622 } 00:19:06.622 } 00:19:06.622 ]' 00:19:06.622 09:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.622 09:33:38 
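A reading note for the comparisons here and throughout: lines like [[ sha512 == \s\h\a\5\1\2 ]] are not garbled. Under set -x, bash re-prints a quoted right-hand side of [[ == ]] with every character backslash-escaped, marking it as a literal string rather than a glob pattern. A minimal reproduction:

set -x
digest=sha512
[[ $digest == "sha512" ]]    # traced as: [[ sha512 == \s\h\a\5\1\2 ]]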
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.622 09:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.622 09:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:06.622 09:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.622 09:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.622 09:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.622 09:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.883 09:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MGI1ZWQwMWIxZjcwNWJiODc5ZTNkM2RhYTY0MjBlZDYyNzExODBlYTNjNDAyYmUwZTU5NTVmN2JiYzJhMDkxZIZ6OA8=: 00:19:07.825 09:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.825 09:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.825 09:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.825 09:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.825 09:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.825 09:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.825 09:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.825 09:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:07.825 09:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:07.825 09:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:07.825 09:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.825 09:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:07.825 09:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:07.825 09:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:07.825 09:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.825 09:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.825 09:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.825 09:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.825 09:33:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.826 09:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.826 09:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.089 00:19:08.089 09:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.089 09:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.089 09:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.423 09:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.423 09:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.423 09:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.423 09:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.423 09:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.423 09:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.423 { 00:19:08.423 "cntlid": 105, 00:19:08.423 "qid": 0, 00:19:08.423 "state": "enabled", 00:19:08.423 "listen_address": { 00:19:08.423 "trtype": "TCP", 00:19:08.423 "adrfam": "IPv4", 00:19:08.423 "traddr": "10.0.0.2", 00:19:08.423 "trsvcid": "4420" 00:19:08.423 }, 00:19:08.423 "peer_address": { 00:19:08.423 "trtype": "TCP", 00:19:08.423 "adrfam": "IPv4", 00:19:08.423 "traddr": "10.0.0.1", 00:19:08.423 "trsvcid": "60442" 00:19:08.423 }, 00:19:08.423 "auth": { 00:19:08.423 "state": "completed", 00:19:08.423 "digest": "sha512", 00:19:08.423 "dhgroup": "ffdhe2048" 00:19:08.423 } 00:19:08.423 } 00:19:08.423 ]' 00:19:08.423 09:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.423 09:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.423 09:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.423 09:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:08.424 09:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.424 09:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.424 09:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.424 09:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.684 09:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==: --dhchap-ctrl-secret DHHC-1:03:MmZmMDc0OTAwNDk4MDZlOTY5YjZkYWRlNTI3MTNkNDM0ODczYzIwNTg4MjZjNmRiYzYyMTU3ZWEyNjQ5NTY4Nwknc4c=: 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.626 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.887 00:19:09.887 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.887 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:09.887 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.148 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.148 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.148 09:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.148 09:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.148 09:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.148 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.148 { 00:19:10.148 "cntlid": 107, 00:19:10.148 "qid": 0, 00:19:10.148 "state": "enabled", 00:19:10.148 "listen_address": { 00:19:10.148 "trtype": "TCP", 00:19:10.148 "adrfam": "IPv4", 00:19:10.148 "traddr": "10.0.0.2", 00:19:10.148 "trsvcid": "4420" 00:19:10.148 }, 00:19:10.148 "peer_address": { 00:19:10.148 "trtype": "TCP", 00:19:10.148 "adrfam": "IPv4", 00:19:10.148 "traddr": "10.0.0.1", 00:19:10.148 "trsvcid": "52120" 00:19:10.148 }, 00:19:10.148 "auth": { 00:19:10.148 "state": "completed", 00:19:10.148 "digest": "sha512", 00:19:10.148 "dhgroup": "ffdhe2048" 00:19:10.148 } 00:19:10.148 } 00:19:10.148 ]' 00:19:10.148 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.148 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.148 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.409 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:10.409 09:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.409 09:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.409 09:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.409 09:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.409 09:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQwYzY4NDc3NDFjNmIxZmQ2OTM1N2E1MjhmMGNlYmZBnqjx: --dhchap-ctrl-secret DHHC-1:02:ZGZjMjZlN2YzMmRhMjdmNzlmNDYwZTA5NDYxZWRiYzk1ZDVkYzgyMGViOGUzNzA1cJlE/A==: 00:19:11.352 09:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.352 09:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:11.352 09:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.352 09:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.352 09:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.352 09:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.352 09:33:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:11.352 09:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:11.352 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:11.352 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.352 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:11.352 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:11.352 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:11.352 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.352 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.352 09:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.352 09:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.352 09:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.352 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.352 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.613 00:19:11.613 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.613 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.613 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.873 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.873 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.873 09:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.873 09:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.873 09:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.873 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.873 { 00:19:11.873 "cntlid": 109, 00:19:11.873 "qid": 0, 00:19:11.873 "state": "enabled", 00:19:11.873 "listen_address": { 00:19:11.873 "trtype": "TCP", 00:19:11.873 "adrfam": "IPv4", 00:19:11.873 "traddr": "10.0.0.2", 00:19:11.873 "trsvcid": "4420" 00:19:11.873 }, 00:19:11.873 "peer_address": { 00:19:11.873 "trtype": "TCP", 00:19:11.873 
"adrfam": "IPv4", 00:19:11.873 "traddr": "10.0.0.1", 00:19:11.873 "trsvcid": "52134" 00:19:11.873 }, 00:19:11.873 "auth": { 00:19:11.873 "state": "completed", 00:19:11.873 "digest": "sha512", 00:19:11.873 "dhgroup": "ffdhe2048" 00:19:11.873 } 00:19:11.873 } 00:19:11.873 ]' 00:19:11.873 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.134 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.134 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.134 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:12.134 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.134 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.134 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.134 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.134 09:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDU5MGEyMzkwZjhmNDY4ZmM1YjhhYjliZDk3YTU3MDkwNmE4YzU2ZmNkMTQ3NDM1H3XLlg==: --dhchap-ctrl-secret DHHC-1:01:Yjg4M2JhNDIxOWQ4NDBjNDRlN2IxZDRkM2IxZDM1MGVMXTYn: 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.075 09:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.336 00:19:13.597 09:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.597 09:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.597 09:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.597 09:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.597 09:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.597 09:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.597 09:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.597 09:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.597 09:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.597 { 00:19:13.597 "cntlid": 111, 00:19:13.597 "qid": 0, 00:19:13.597 "state": "enabled", 00:19:13.597 "listen_address": { 00:19:13.597 "trtype": "TCP", 00:19:13.597 "adrfam": "IPv4", 00:19:13.597 "traddr": "10.0.0.2", 00:19:13.597 "trsvcid": "4420" 00:19:13.597 }, 00:19:13.597 "peer_address": { 00:19:13.597 "trtype": "TCP", 00:19:13.597 "adrfam": "IPv4", 00:19:13.597 "traddr": "10.0.0.1", 00:19:13.597 "trsvcid": "52162" 00:19:13.597 }, 00:19:13.597 "auth": { 00:19:13.597 "state": "completed", 00:19:13.597 "digest": "sha512", 00:19:13.597 "dhgroup": "ffdhe2048" 00:19:13.597 } 00:19:13.597 } 00:19:13.597 ]' 00:19:13.597 09:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.858 09:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.858 09:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.858 09:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.858 09:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.858 09:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.858 09:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.858 09:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:14.119 09:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MGI1ZWQwMWIxZjcwNWJiODc5ZTNkM2RhYTY0MjBlZDYyNzExODBlYTNjNDAyYmUwZTU5NTVmN2JiYzJhMDkxZIZ6OA8=:
00:19:14.690 09:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:14.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:14.690 09:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:14.690 09:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:14.690 09:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:14.690 09:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:14.690 09:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:14.690 09:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:14.690 09:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:14.690 09:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:14.951 09:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0
00:19:14.951 09:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:14.951 09:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:14.951 09:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:19:14.951 09:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:14.951 09:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:14.951 09:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:14.951 09:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:14.951 09:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:14.951 09:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:14.951 09:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:14.951 09:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
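The iteration above is the core of the bidirectional DH-HMAC-CHAP exercise: auth.sh pins the digest and DH group on the host side, registers the host NQN on the target with both a host key (key0) and a controller key (ckey0), and then attaches with the same pair so that host and controller authenticate each other. A condensed sketch of those three RPCs, taken from the exact flags in this trace (rpc.py paths shortened; $HOSTNQN stands in for the long uuid-based host NQN; key0/ckey0 name DH-CHAP keys the script registered earlier in the run):

  # host side: only offer sha512 + ffdhe3072 during negotiation
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # target side: allow this host, requiring key0 and presenting ckey0 back
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attach; --dhchap-ctrlr-key makes the authentication bidirectional
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0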
00:19:15.212
00:19:15.212 09:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:15.212 09:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:15.212 09:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:15.473 09:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:15.473 09:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:15.473 09:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
00:19:15.473 09:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:15.473 09:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:19:15.473 09:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:15.473 {
00:19:15.473 "cntlid": 113,
00:19:15.473 "qid": 0,
00:19:15.473 "state": "enabled",
00:19:15.473 "listen_address": {
00:19:15.473 "trtype": "TCP",
00:19:15.473 "adrfam": "IPv4",
00:19:15.473 "traddr": "10.0.0.2",
00:19:15.473 "trsvcid": "4420"
00:19:15.473 },
00:19:15.473 "peer_address": {
00:19:15.473 "trtype": "TCP",
00:19:15.473 "adrfam": "IPv4",
00:19:15.473 "traddr": "10.0.0.1",
00:19:15.473 "trsvcid": "52204"
00:19:15.473 },
00:19:15.473 "auth": {
00:19:15.473 "state": "completed",
00:19:15.473 "digest": "sha512",
00:19:15.473 "dhgroup": "ffdhe3072"
00:19:15.473 }
00:19:15.473 }
00:19:15.473 ]'
00:19:15.473 09:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:15.473 09:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:15.473 09:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:15.734 09:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:15.734 09:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:15.734 09:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:15.734 09:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:15.734 09:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:15.994 09:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==: --dhchap-ctrl-secret DHHC-1:03:MmZmMDc0OTAwNDk4MDZlOTY5YjZkYWRlNTI3MTNkNDM0ODczYzIwNTg4MjZjNmRiYzYyMTU3ZWEyNjQ5NTY4Nwknc4c=:
00:19:16.566 09:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:16.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:16.566 09:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:19:16.566 09:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable
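Each pass then verifies the handshake from the target's point of view before tearing down: nvmf_subsystem_get_qpairs must report the qpair with auth.state "completed" and the digest/dhgroup that were just configured. The three jq probes the script runs above can be collapsed into one; a sketch, not what auth.sh literally executes (rpc.py path shortened):

  # confirm the qpair finished DH-HMAC-CHAP with the expected parameters
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth | .state, .digest, .dhgroup'
  # expected for this pass: completed / sha512 / ffdhe3072

The same key material is then replayed through the kernel initiator: the nvme connect line above passes the host key as --dhchap-secret and the controller key as --dhchap-ctrl-secret, both in DHHC-1 interchange format, and the clean disconnect that follows is what confirms the in-kernel path also negotiated successfully.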
00:19:16.566 09:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.566 09:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.566 09:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.566 09:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:16.566 09:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:16.825 09:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:16.825 09:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.825 09:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:16.825 09:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:16.825 09:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:16.825 09:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.825 09:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.825 09:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.825 09:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.825 09:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.825 09:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.825 09:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.086 00:19:17.086 09:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.086 09:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.086 09:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.347 09:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.347 09:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.347 09:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.347 09:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.347 09:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.347 09:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.347 { 00:19:17.347 
"cntlid": 115, 00:19:17.347 "qid": 0, 00:19:17.347 "state": "enabled", 00:19:17.347 "listen_address": { 00:19:17.347 "trtype": "TCP", 00:19:17.347 "adrfam": "IPv4", 00:19:17.347 "traddr": "10.0.0.2", 00:19:17.347 "trsvcid": "4420" 00:19:17.347 }, 00:19:17.347 "peer_address": { 00:19:17.347 "trtype": "TCP", 00:19:17.347 "adrfam": "IPv4", 00:19:17.347 "traddr": "10.0.0.1", 00:19:17.347 "trsvcid": "52226" 00:19:17.347 }, 00:19:17.347 "auth": { 00:19:17.347 "state": "completed", 00:19:17.347 "digest": "sha512", 00:19:17.347 "dhgroup": "ffdhe3072" 00:19:17.347 } 00:19:17.347 } 00:19:17.347 ]' 00:19:17.347 09:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.347 09:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.347 09:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.347 09:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:17.347 09:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.347 09:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.347 09:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.347 09:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.608 09:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQwYzY4NDc3NDFjNmIxZmQ2OTM1N2E1MjhmMGNlYmZBnqjx: --dhchap-ctrl-secret DHHC-1:02:ZGZjMjZlN2YzMmRhMjdmNzlmNDYwZTA5NDYxZWRiYzk1ZDVkYzgyMGViOGUzNzA1cJlE/A==: 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.550 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.810 00:19:18.810 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.810 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.810 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.070 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.070 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.071 09:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.071 09:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.071 09:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.071 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.071 { 00:19:19.071 "cntlid": 117, 00:19:19.071 "qid": 0, 00:19:19.071 "state": "enabled", 00:19:19.071 "listen_address": { 00:19:19.071 "trtype": "TCP", 00:19:19.071 "adrfam": "IPv4", 00:19:19.071 "traddr": "10.0.0.2", 00:19:19.071 "trsvcid": "4420" 00:19:19.071 }, 00:19:19.071 "peer_address": { 00:19:19.071 "trtype": "TCP", 00:19:19.071 "adrfam": "IPv4", 00:19:19.071 "traddr": "10.0.0.1", 00:19:19.071 "trsvcid": "35220" 00:19:19.071 }, 00:19:19.071 "auth": { 00:19:19.071 "state": "completed", 00:19:19.071 "digest": "sha512", 00:19:19.071 "dhgroup": "ffdhe3072" 00:19:19.071 } 00:19:19.071 } 00:19:19.071 ]' 00:19:19.071 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.071 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.071 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.071 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:19.332 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:19:19.332 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.332 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.332 09:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.332 09:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDU5MGEyMzkwZjhmNDY4ZmM1YjhhYjliZDk3YTU3MDkwNmE4YzU2ZmNkMTQ3NDM1H3XLlg==: --dhchap-ctrl-secret DHHC-1:01:Yjg4M2JhNDIxOWQ4NDBjNDRlN2IxZDRkM2IxZDM1MGVMXTYn: 00:19:20.274 09:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.274 09:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.274 09:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.274 09:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.274 09:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.274 09:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.274 09:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.274 09:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.274 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:20.274 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.274 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:20.274 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:20.274 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:20.274 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.274 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:20.274 09:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.274 09:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.274 09:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.274 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.274 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.551 00:19:20.814 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.814 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.814 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.814 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.814 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.814 09:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.814 09:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.814 09:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.814 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.814 { 00:19:20.814 "cntlid": 119, 00:19:20.814 "qid": 0, 00:19:20.814 "state": "enabled", 00:19:20.814 "listen_address": { 00:19:20.814 "trtype": "TCP", 00:19:20.814 "adrfam": "IPv4", 00:19:20.814 "traddr": "10.0.0.2", 00:19:20.814 "trsvcid": "4420" 00:19:20.814 }, 00:19:20.814 "peer_address": { 00:19:20.814 "trtype": "TCP", 00:19:20.814 "adrfam": "IPv4", 00:19:20.814 "traddr": "10.0.0.1", 00:19:20.814 "trsvcid": "35244" 00:19:20.814 }, 00:19:20.814 "auth": { 00:19:20.814 "state": "completed", 00:19:20.814 "digest": "sha512", 00:19:20.814 "dhgroup": "ffdhe3072" 00:19:20.814 } 00:19:20.814 } 00:19:20.814 ]' 00:19:20.814 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.075 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.076 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.076 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:21.076 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.076 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.076 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.076 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.336 09:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MGI1ZWQwMWIxZjcwNWJiODc5ZTNkM2RhYTY0MjBlZDYyNzExODBlYTNjNDAyYmUwZTU5NTVmN2JiYzJhMDkxZIZ6OA8=: 00:19:21.907 09:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.907 09:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:21.907 09:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.907 09:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.907 09:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.907 09:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.907 09:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.907 09:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:21.907 09:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:22.167 09:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:22.167 09:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.167 09:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:22.167 09:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:22.167 09:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:22.167 09:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.168 09:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.168 09:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.168 09:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.168 09:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.168 09:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.168 09:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.428 00:19:22.428 09:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.428 09:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.428 09:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.689 09:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.689 09:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.689 09:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.689 09:33:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.689 09:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.689 09:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.689 { 00:19:22.689 "cntlid": 121, 00:19:22.689 "qid": 0, 00:19:22.689 "state": "enabled", 00:19:22.689 "listen_address": { 00:19:22.689 "trtype": "TCP", 00:19:22.689 "adrfam": "IPv4", 00:19:22.689 "traddr": "10.0.0.2", 00:19:22.689 "trsvcid": "4420" 00:19:22.689 }, 00:19:22.689 "peer_address": { 00:19:22.689 "trtype": "TCP", 00:19:22.689 "adrfam": "IPv4", 00:19:22.689 "traddr": "10.0.0.1", 00:19:22.689 "trsvcid": "35264" 00:19:22.689 }, 00:19:22.689 "auth": { 00:19:22.689 "state": "completed", 00:19:22.689 "digest": "sha512", 00:19:22.689 "dhgroup": "ffdhe4096" 00:19:22.689 } 00:19:22.689 } 00:19:22.689 ]' 00:19:22.689 09:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.689 09:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.689 09:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.957 09:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:22.957 09:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.957 09:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.957 09:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.957 09:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.244 09:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==: --dhchap-ctrl-secret DHHC-1:03:MmZmMDc0OTAwNDk4MDZlOTY5YjZkYWRlNTI3MTNkNDM0ODczYzIwNTg4MjZjNmRiYzYyMTU3ZWEyNjQ5NTY4Nwknc4c=: 00:19:23.817 09:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.817 09:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.817 09:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.818 09:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.818 09:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:23.818 09:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.818 09:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:23.818 09:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:24.079 09:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:19:24.079 09:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.079 09:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:24.079 09:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:24.079 09:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:24.079 09:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.079 09:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.079 09:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.079 09:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.079 09:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.079 09:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.079 09:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.340 00:19:24.340 09:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.340 09:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.340 09:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.601 09:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.601 09:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.601 09:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.601 09:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.601 09:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.601 09:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.601 { 00:19:24.601 "cntlid": 123, 00:19:24.601 "qid": 0, 00:19:24.601 "state": "enabled", 00:19:24.601 "listen_address": { 00:19:24.601 "trtype": "TCP", 00:19:24.601 "adrfam": "IPv4", 00:19:24.601 "traddr": "10.0.0.2", 00:19:24.601 "trsvcid": "4420" 00:19:24.601 }, 00:19:24.601 "peer_address": { 00:19:24.601 "trtype": "TCP", 00:19:24.601 "adrfam": "IPv4", 00:19:24.601 "traddr": "10.0.0.1", 00:19:24.601 "trsvcid": "35296" 00:19:24.601 }, 00:19:24.601 "auth": { 00:19:24.601 "state": "completed", 00:19:24.601 "digest": "sha512", 00:19:24.601 "dhgroup": "ffdhe4096" 00:19:24.601 } 00:19:24.601 } 00:19:24.601 ]' 00:19:24.601 09:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.601 09:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.601 09:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.601 09:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:24.601 09:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.601 09:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.601 09:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.601 09:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.860 09:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQwYzY4NDc3NDFjNmIxZmQ2OTM1N2E1MjhmMGNlYmZBnqjx: --dhchap-ctrl-secret DHHC-1:02:ZGZjMjZlN2YzMmRhMjdmNzlmNDYwZTA5NDYxZWRiYzk1ZDVkYzgyMGViOGUzNzA1cJlE/A==: 00:19:25.801 09:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.801 09:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.801 09:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.801 09:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.801 09:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.801 09:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.801 09:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.801 09:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.801 09:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:25.801 09:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.801 09:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.801 09:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:25.801 09:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:25.801 09:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.801 09:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.801 09:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.801 09:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.801 09:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.801 
09:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.801 09:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.062 00:19:26.062 09:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.062 09:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.062 09:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.323 09:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.323 09:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.323 09:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.323 09:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.323 09:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.323 09:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.323 { 00:19:26.323 "cntlid": 125, 00:19:26.323 "qid": 0, 00:19:26.323 "state": "enabled", 00:19:26.323 "listen_address": { 00:19:26.323 "trtype": "TCP", 00:19:26.323 "adrfam": "IPv4", 00:19:26.323 "traddr": "10.0.0.2", 00:19:26.323 "trsvcid": "4420" 00:19:26.323 }, 00:19:26.323 "peer_address": { 00:19:26.323 "trtype": "TCP", 00:19:26.323 "adrfam": "IPv4", 00:19:26.323 "traddr": "10.0.0.1", 00:19:26.323 "trsvcid": "35324" 00:19:26.323 }, 00:19:26.323 "auth": { 00:19:26.323 "state": "completed", 00:19:26.323 "digest": "sha512", 00:19:26.323 "dhgroup": "ffdhe4096" 00:19:26.323 } 00:19:26.323 } 00:19:26.323 ]' 00:19:26.323 09:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.323 09:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.323 09:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.583 09:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:26.583 09:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.583 09:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.583 09:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.583 09:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.844 09:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:02:ZDU5MGEyMzkwZjhmNDY4ZmM1YjhhYjliZDk3YTU3MDkwNmE4YzU2ZmNkMTQ3NDM1H3XLlg==: --dhchap-ctrl-secret DHHC-1:01:Yjg4M2JhNDIxOWQ4NDBjNDRlN2IxZDRkM2IxZDM1MGVMXTYn: 00:19:27.415 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.415 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.415 09:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.415 09:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.415 09:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.415 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.415 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.415 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.676 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:27.676 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.676 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.676 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:27.676 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:27.676 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.676 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:27.676 09:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.676 09:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.676 09:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.676 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.676 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.937 00:19:27.937 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.937 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.937 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.197 09:33:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.197 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.197 09:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.197 09:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.197 09:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.197 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.197 { 00:19:28.197 "cntlid": 127, 00:19:28.197 "qid": 0, 00:19:28.197 "state": "enabled", 00:19:28.197 "listen_address": { 00:19:28.197 "trtype": "TCP", 00:19:28.197 "adrfam": "IPv4", 00:19:28.197 "traddr": "10.0.0.2", 00:19:28.197 "trsvcid": "4420" 00:19:28.197 }, 00:19:28.197 "peer_address": { 00:19:28.197 "trtype": "TCP", 00:19:28.197 "adrfam": "IPv4", 00:19:28.197 "traddr": "10.0.0.1", 00:19:28.197 "trsvcid": "35334" 00:19:28.197 }, 00:19:28.197 "auth": { 00:19:28.197 "state": "completed", 00:19:28.197 "digest": "sha512", 00:19:28.197 "dhgroup": "ffdhe4096" 00:19:28.197 } 00:19:28.197 } 00:19:28.197 ]' 00:19:28.197 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.197 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.197 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.197 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:28.197 09:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.457 09:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.457 09:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.457 09:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.457 09:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MGI1ZWQwMWIxZjcwNWJiODc5ZTNkM2RhYTY0MjBlZDYyNzExODBlYTNjNDAyYmUwZTU5NTVmN2JiYzJhMDkxZIZ6OA8=: 00:19:29.398 09:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.399 09:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.399 09:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.399 09:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.399 09:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.399 09:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.399 09:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.399 09:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
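The records above and below are produced by the loop at target/auth.sh@92-@96, which repeats one connect_authenticate pass per key index for each DH group. A minimal sketch of a single pass, using only the RPCs and identifiers visible in this trace — the $n loop variable, the $key/$ckey placeholders (standing in for the DHHC-1:xx:...: secrets printed in the log), and the assumption that the target-side rpc_cmd maps to a plain rpc.py call are illustrative; everything else is copied from the records:

    # hedged sketch of one connect_authenticate pass (digest=sha512, dhgroup=ffdhe6144, key index n)
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # pin the SPDK host to the digest/dhgroup under test (hostrpc = rpc.py -s /var/tmp/host.sock)
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # register the host on the target with key pair n (rpc_cmd; target socket wiring assumed)
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$n" --dhchap-ctrlr-key "ckey$n"
    # attach from the SPDK host, then verify the qpair finished authentication
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$n" --dhchap-ctrlr-key "ckey$n"
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # repeat the handshake with the kernel initiator, then tear down
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n "$subnqn"
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The jq checks of .[0].auth.digest, .[0].auth.dhgroup, and .[0].auth.state at target/auth.sh@46-@48 are what each "[[ sha512 == ... ]]" comparison in the trace is asserting.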
00:19:29.399 09:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:29.399 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:29.399 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.399 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:29.399 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:29.399 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:29.399 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.399 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.399 09:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:29.399 09:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.399 09:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:29.399 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.399 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.970 00:19:29.970 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.970 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.970 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.230 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.230 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.230 09:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.230 09:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.230 09:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.230 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.230 { 00:19:30.230 "cntlid": 129, 00:19:30.230 "qid": 0, 00:19:30.230 "state": "enabled", 00:19:30.230 "listen_address": { 00:19:30.230 "trtype": "TCP", 00:19:30.230 "adrfam": "IPv4", 00:19:30.230 "traddr": "10.0.0.2", 00:19:30.230 "trsvcid": "4420" 00:19:30.230 }, 00:19:30.230 "peer_address": { 00:19:30.230 "trtype": "TCP", 00:19:30.230 "adrfam": "IPv4", 00:19:30.230 "traddr": "10.0.0.1", 00:19:30.230 "trsvcid": "48498" 00:19:30.230 }, 00:19:30.230 "auth": { 
00:19:30.230 "state": "completed", 00:19:30.230 "digest": "sha512", 00:19:30.230 "dhgroup": "ffdhe6144" 00:19:30.230 } 00:19:30.230 } 00:19:30.230 ]' 00:19:30.230 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.230 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.230 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.230 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:30.230 09:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.230 09:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.230 09:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.230 09:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.490 09:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==: --dhchap-ctrl-secret DHHC-1:03:MmZmMDc0OTAwNDk4MDZlOTY5YjZkYWRlNTI3MTNkNDM0ODczYzIwNTg4MjZjNmRiYzYyMTU3ZWEyNjQ5NTY4Nwknc4c=: 00:19:31.430 09:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.430 09:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.430 09:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.430 09:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.430 09:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.430 09:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.430 09:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:31.430 09:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:31.430 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:31.430 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.430 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:31.430 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:31.430 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:31.430 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.430 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.430 09:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.430 09:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.430 09:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.430 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.430 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.001 00:19:32.001 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.002 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.002 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.002 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.002 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.002 09:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:32.002 09:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.261 09:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:32.261 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.261 { 00:19:32.261 "cntlid": 131, 00:19:32.261 "qid": 0, 00:19:32.261 "state": "enabled", 00:19:32.261 "listen_address": { 00:19:32.261 "trtype": "TCP", 00:19:32.261 "adrfam": "IPv4", 00:19:32.261 "traddr": "10.0.0.2", 00:19:32.261 "trsvcid": "4420" 00:19:32.261 }, 00:19:32.261 "peer_address": { 00:19:32.261 "trtype": "TCP", 00:19:32.261 "adrfam": "IPv4", 00:19:32.261 "traddr": "10.0.0.1", 00:19:32.261 "trsvcid": "48518" 00:19:32.261 }, 00:19:32.261 "auth": { 00:19:32.261 "state": "completed", 00:19:32.261 "digest": "sha512", 00:19:32.261 "dhgroup": "ffdhe6144" 00:19:32.261 } 00:19:32.261 } 00:19:32.261 ]' 00:19:32.261 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.261 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.261 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.261 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:32.261 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.261 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.261 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.262 09:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.522 09:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQwYzY4NDc3NDFjNmIxZmQ2OTM1N2E1MjhmMGNlYmZBnqjx: --dhchap-ctrl-secret DHHC-1:02:ZGZjMjZlN2YzMmRhMjdmNzlmNDYwZTA5NDYxZWRiYzk1ZDVkYzgyMGViOGUzNzA1cJlE/A==: 00:19:33.093 09:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.093 09:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:33.093 09:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.093 09:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.093 09:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.093 09:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.093 09:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:33.093 09:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:33.353 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:33.353 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.353 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:33.353 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:33.353 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:33.353 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.353 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.353 09:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.353 09:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.353 09:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.353 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.353 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:33.923 00:19:33.923 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.923 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.923 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.923 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.923 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.923 09:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.923 09:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.923 09:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.923 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.923 { 00:19:33.923 "cntlid": 133, 00:19:33.923 "qid": 0, 00:19:33.923 "state": "enabled", 00:19:33.923 "listen_address": { 00:19:33.923 "trtype": "TCP", 00:19:33.923 "adrfam": "IPv4", 00:19:33.923 "traddr": "10.0.0.2", 00:19:33.923 "trsvcid": "4420" 00:19:33.923 }, 00:19:33.923 "peer_address": { 00:19:33.923 "trtype": "TCP", 00:19:33.923 "adrfam": "IPv4", 00:19:33.923 "traddr": "10.0.0.1", 00:19:33.923 "trsvcid": "48550" 00:19:33.923 }, 00:19:33.923 "auth": { 00:19:33.923 "state": "completed", 00:19:33.923 "digest": "sha512", 00:19:33.923 "dhgroup": "ffdhe6144" 00:19:33.923 } 00:19:33.923 } 00:19:33.923 ]' 00:19:33.923 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.183 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.183 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.183 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:34.183 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.183 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.183 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.183 09:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.443 09:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDU5MGEyMzkwZjhmNDY4ZmM1YjhhYjliZDk3YTU3MDkwNmE4YzU2ZmNkMTQ3NDM1H3XLlg==: --dhchap-ctrl-secret DHHC-1:01:Yjg4M2JhNDIxOWQ4NDBjNDRlN2IxZDRkM2IxZDM1MGVMXTYn: 00:19:35.015 09:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.015 09:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:35.015 09:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.015 09:34:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.015 09:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.015 09:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.015 09:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.015 09:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.275 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:35.275 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.275 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:35.275 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:35.275 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:35.275 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.275 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:35.275 09:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.275 09:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.275 09:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.275 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.275 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.846 00:19:35.846 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.846 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.846 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.846 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.847 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.847 09:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.847 09:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.847 09:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.847 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.847 { 00:19:35.847 "cntlid": 135, 00:19:35.847 "qid": 0, 00:19:35.847 "state": "enabled", 00:19:35.847 "listen_address": { 
00:19:35.847 "trtype": "TCP", 00:19:35.847 "adrfam": "IPv4", 00:19:35.847 "traddr": "10.0.0.2", 00:19:35.847 "trsvcid": "4420" 00:19:35.847 }, 00:19:35.847 "peer_address": { 00:19:35.847 "trtype": "TCP", 00:19:35.847 "adrfam": "IPv4", 00:19:35.847 "traddr": "10.0.0.1", 00:19:35.847 "trsvcid": "48562" 00:19:35.847 }, 00:19:35.847 "auth": { 00:19:35.847 "state": "completed", 00:19:35.847 "digest": "sha512", 00:19:35.847 "dhgroup": "ffdhe6144" 00:19:35.847 } 00:19:35.847 } 00:19:35.847 ]' 00:19:35.847 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.107 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.107 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.107 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.107 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.107 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.107 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.107 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.367 09:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MGI1ZWQwMWIxZjcwNWJiODc5ZTNkM2RhYTY0MjBlZDYyNzExODBlYTNjNDAyYmUwZTU5NTVmN2JiYzJhMDkxZIZ6OA8=: 00:19:36.938 09:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.938 09:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.938 09:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:36.938 09:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.938 09:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:36.938 09:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.938 09:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.938 09:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:36.938 09:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.199 09:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:37.199 09:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.199 09:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:37.199 09:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:37.199 09:34:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:19:37.199 09:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.199 09:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.199 09:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:37.199 09:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.199 09:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:37.199 09:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.199 09:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.770 00:19:37.770 09:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.770 09:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.770 09:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.057 09:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.057 09:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.057 09:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.057 09:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.057 09:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.057 09:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.057 { 00:19:38.057 "cntlid": 137, 00:19:38.057 "qid": 0, 00:19:38.057 "state": "enabled", 00:19:38.057 "listen_address": { 00:19:38.057 "trtype": "TCP", 00:19:38.057 "adrfam": "IPv4", 00:19:38.057 "traddr": "10.0.0.2", 00:19:38.057 "trsvcid": "4420" 00:19:38.057 }, 00:19:38.057 "peer_address": { 00:19:38.057 "trtype": "TCP", 00:19:38.057 "adrfam": "IPv4", 00:19:38.057 "traddr": "10.0.0.1", 00:19:38.057 "trsvcid": "48596" 00:19:38.057 }, 00:19:38.057 "auth": { 00:19:38.057 "state": "completed", 00:19:38.057 "digest": "sha512", 00:19:38.057 "dhgroup": "ffdhe8192" 00:19:38.057 } 00:19:38.057 } 00:19:38.057 ]' 00:19:38.057 09:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.057 09:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.057 09:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.319 09:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:38.319 09:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.319 09:34:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.319 09:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.319 09:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.580 09:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==: --dhchap-ctrl-secret DHHC-1:03:MmZmMDc0OTAwNDk4MDZlOTY5YjZkYWRlNTI3MTNkNDM0ODczYzIwNTg4MjZjNmRiYzYyMTU3ZWEyNjQ5NTY4Nwknc4c=: 00:19:39.150 09:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.151 09:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.151 09:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.151 09:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.151 09:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.151 09:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.151 09:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.151 09:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.411 09:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:39.411 09:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.411 09:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.411 09:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:39.411 09:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:39.411 09:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.411 09:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.411 09:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.411 09:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.411 09:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.411 09:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.411 09:34:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.982 00:19:39.982 09:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.982 09:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.982 09:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.243 09:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.243 09:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.243 09:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.243 09:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.243 09:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.243 09:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.243 { 00:19:40.243 "cntlid": 139, 00:19:40.243 "qid": 0, 00:19:40.243 "state": "enabled", 00:19:40.243 "listen_address": { 00:19:40.243 "trtype": "TCP", 00:19:40.243 "adrfam": "IPv4", 00:19:40.243 "traddr": "10.0.0.2", 00:19:40.243 "trsvcid": "4420" 00:19:40.243 }, 00:19:40.243 "peer_address": { 00:19:40.243 "trtype": "TCP", 00:19:40.243 "adrfam": "IPv4", 00:19:40.243 "traddr": "10.0.0.1", 00:19:40.243 "trsvcid": "35850" 00:19:40.243 }, 00:19:40.243 "auth": { 00:19:40.243 "state": "completed", 00:19:40.243 "digest": "sha512", 00:19:40.243 "dhgroup": "ffdhe8192" 00:19:40.243 } 00:19:40.243 } 00:19:40.243 ]' 00:19:40.243 09:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.243 09:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.243 09:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.243 09:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:40.243 09:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.504 09:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.504 09:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.504 09:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.504 09:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQwYzY4NDc3NDFjNmIxZmQ2OTM1N2E1MjhmMGNlYmZBnqjx: --dhchap-ctrl-secret DHHC-1:02:ZGZjMjZlN2YzMmRhMjdmNzlmNDYwZTA5NDYxZWRiYzk1ZDVkYzgyMGViOGUzNzA1cJlE/A==: 00:19:41.453 09:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:41.453 09:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.453 09:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.453 09:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.453 09:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.453 09:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.453 09:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:41.454 09:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:41.454 09:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:41.454 09:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.454 09:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:41.454 09:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:41.454 09:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:41.454 09:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.454 09:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.454 09:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.454 09:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.454 09:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.454 09:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.454 09:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.037 00:19:42.037 09:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.037 09:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.037 09:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.297 09:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.297 09:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.297 09:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:19:42.297 09:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.297 09:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.297 09:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.297 { 00:19:42.297 "cntlid": 141, 00:19:42.297 "qid": 0, 00:19:42.297 "state": "enabled", 00:19:42.297 "listen_address": { 00:19:42.297 "trtype": "TCP", 00:19:42.297 "adrfam": "IPv4", 00:19:42.297 "traddr": "10.0.0.2", 00:19:42.297 "trsvcid": "4420" 00:19:42.298 }, 00:19:42.298 "peer_address": { 00:19:42.298 "trtype": "TCP", 00:19:42.298 "adrfam": "IPv4", 00:19:42.298 "traddr": "10.0.0.1", 00:19:42.298 "trsvcid": "35878" 00:19:42.298 }, 00:19:42.298 "auth": { 00:19:42.298 "state": "completed", 00:19:42.298 "digest": "sha512", 00:19:42.298 "dhgroup": "ffdhe8192" 00:19:42.298 } 00:19:42.298 } 00:19:42.298 ]' 00:19:42.298 09:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.558 09:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:42.558 09:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.558 09:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:42.558 09:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.558 09:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.558 09:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.558 09:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.818 09:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:ZDU5MGEyMzkwZjhmNDY4ZmM1YjhhYjliZDk3YTU3MDkwNmE4YzU2ZmNkMTQ3NDM1H3XLlg==: --dhchap-ctrl-secret DHHC-1:01:Yjg4M2JhNDIxOWQ4NDBjNDRlN2IxZDRkM2IxZDM1MGVMXTYn: 00:19:43.389 09:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.389 09:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:43.389 09:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:43.389 09:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.389 09:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:43.389 09:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.389 09:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:43.389 09:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:43.650 09:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:19:43.650 09:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.650 09:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:43.650 09:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:43.650 09:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:43.650 09:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.650 09:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:43.650 09:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:43.650 09:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.650 09:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:43.650 09:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.650 09:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.221 00:19:44.221 09:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.221 09:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.221 09:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.481 09:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.481 09:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.481 09:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.481 09:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.481 09:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.481 09:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.481 { 00:19:44.481 "cntlid": 143, 00:19:44.481 "qid": 0, 00:19:44.481 "state": "enabled", 00:19:44.481 "listen_address": { 00:19:44.481 "trtype": "TCP", 00:19:44.481 "adrfam": "IPv4", 00:19:44.481 "traddr": "10.0.0.2", 00:19:44.481 "trsvcid": "4420" 00:19:44.481 }, 00:19:44.481 "peer_address": { 00:19:44.481 "trtype": "TCP", 00:19:44.481 "adrfam": "IPv4", 00:19:44.481 "traddr": "10.0.0.1", 00:19:44.481 "trsvcid": "35904" 00:19:44.481 }, 00:19:44.481 "auth": { 00:19:44.481 "state": "completed", 00:19:44.481 "digest": "sha512", 00:19:44.481 "dhgroup": "ffdhe8192" 00:19:44.481 } 00:19:44.481 } 00:19:44.481 ]' 00:19:44.481 09:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.481 09:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.481 09:34:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.481 09:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:44.481 09:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.742 09:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.742 09:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.742 09:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.742 09:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MGI1ZWQwMWIxZjcwNWJiODc5ZTNkM2RhYTY0MjBlZDYyNzExODBlYTNjNDAyYmUwZTU5NTVmN2JiYzJhMDkxZIZ6OA8=: 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 
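Note: the trace above is one pass of the test's connect_authenticate helper. The following is a minimal bash sketch of that round-trip, assembled from the commands visible in the log (the rpc.py path, NQNs, addresses and key ids are copied from it; the variable names are illustrative, not the script's own source):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Pin the host initiator to one digest/dhgroup pair for this iteration.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # Register the host on the target with its DH-HMAC-CHAP key (plus optional ctrlr key).
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Attach a controller from the host side; this succeeds only if the handshake completes.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Confirm the negotiated parameters from the target's view of the qpair.
    $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth'

The jq checks in the log (.auth.digest, .auth.dhgroup, .auth.state == "completed") are the assertion step of exactly this flow.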
00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.683 09:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.254 00:19:46.516 09:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.516 09:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.516 09:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.516 09:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.516 09:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.516 09:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.516 09:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.516 09:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.516 09:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.516 { 00:19:46.516 "cntlid": 145, 00:19:46.516 "qid": 0, 00:19:46.516 "state": "enabled", 00:19:46.516 "listen_address": { 00:19:46.516 "trtype": "TCP", 00:19:46.516 "adrfam": "IPv4", 00:19:46.516 "traddr": "10.0.0.2", 00:19:46.516 "trsvcid": "4420" 00:19:46.516 }, 00:19:46.516 "peer_address": { 00:19:46.516 "trtype": "TCP", 00:19:46.516 "adrfam": "IPv4", 00:19:46.516 "traddr": "10.0.0.1", 00:19:46.516 "trsvcid": "35940" 00:19:46.516 }, 00:19:46.516 "auth": { 00:19:46.516 "state": "completed", 00:19:46.516 "digest": "sha512", 00:19:46.516 "dhgroup": "ffdhe8192" 00:19:46.516 } 00:19:46.516 } 00:19:46.516 ]' 00:19:46.516 09:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.776 09:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.776 09:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.776 09:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:46.776 09:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.776 09:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.776 09:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.776 09:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.037 
09:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MmZjODNjN2VmMDRmMGU0YmJkNjExNjA0ZTY4ODgwNDBjNjExNGRiNzRlMTc5MDg1NOVK7A==: --dhchap-ctrl-secret DHHC-1:03:MmZmMDc0OTAwNDk4MDZlOTY5YjZkYWRlNTI3MTNkNDM0ODczYzIwNTg4MjZjNmRiYzYyMTU3ZWEyNjQ5NTY4Nwknc4c=: 00:19:47.609 09:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.609 09:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.609 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.609 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.609 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.609 09:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:47.609 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.609 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.609 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.609 09:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:47.609 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:47.609 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:47.609 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:47.609 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:47.609 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:47.609 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:47.609 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:47.609 09:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:48.182 request: 00:19:48.182 { 00:19:48.182 "name": "nvme0", 00:19:48.182 "trtype": "tcp", 00:19:48.182 "traddr": 
"10.0.0.2", 00:19:48.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:48.182 "adrfam": "ipv4", 00:19:48.182 "trsvcid": "4420", 00:19:48.182 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:48.182 "dhchap_key": "key2", 00:19:48.182 "method": "bdev_nvme_attach_controller", 00:19:48.182 "req_id": 1 00:19:48.182 } 00:19:48.182 Got JSON-RPC error response 00:19:48.182 response: 00:19:48.182 { 00:19:48.182 "code": -5, 00:19:48.182 "message": "Input/output error" 00:19:48.182 } 00:19:48.182 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:48.182 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:48.182 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:48.182 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:48.182 09:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:48.182 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:48.182 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.182 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:48.182 09:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.182 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:48.182 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.182 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:48.182 09:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:48.182 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:48.443 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:48.443 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:48.443 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:48.443 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:48.443 09:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:48.443 09:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:48.443 09:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:49.015 request: 00:19:49.015 { 00:19:49.015 "name": "nvme0", 00:19:49.015 "trtype": "tcp", 00:19:49.015 "traddr": "10.0.0.2", 00:19:49.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:49.015 "adrfam": "ipv4", 00:19:49.015 "trsvcid": "4420", 00:19:49.015 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:49.015 "dhchap_key": "key1", 00:19:49.015 "dhchap_ctrlr_key": "ckey2", 00:19:49.015 "method": "bdev_nvme_attach_controller", 00:19:49.015 "req_id": 1 00:19:49.015 } 00:19:49.015 Got JSON-RPC error response 00:19:49.015 response: 00:19:49.015 { 00:19:49.015 "code": -5, 00:19:49.015 "message": "Input/output error" 00:19:49.015 } 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.015 09:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.588 request: 00:19:49.588 { 00:19:49.588 "name": "nvme0", 00:19:49.588 "trtype": "tcp", 00:19:49.588 "traddr": "10.0.0.2", 00:19:49.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:49.588 "adrfam": "ipv4", 00:19:49.588 "trsvcid": "4420", 00:19:49.588 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:49.588 "dhchap_key": "key1", 00:19:49.588 "dhchap_ctrlr_key": "ckey1", 00:19:49.588 "method": "bdev_nvme_attach_controller", 00:19:49.588 "req_id": 1 00:19:49.588 } 00:19:49.588 Got JSON-RPC error response 00:19:49.588 response: 00:19:49.588 { 00:19:49.588 "code": -5, 00:19:49.588 "message": "Input/output error" 00:19:49.588 } 00:19:49.588 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:49.588 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:49.588 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:49.588 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:49.589 09:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:49.589 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.589 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.589 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.589 09:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1126504 00:19:49.589 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 1126504 ']' 00:19:49.589 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 1126504 00:19:49.589 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:19:49.589 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:49.589 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1126504 00:19:49.589 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:49.589 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:49.589 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1126504' 00:19:49.589 killing process with pid 1126504 00:19:49.589 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 1126504 00:19:49.589 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 1126504 00:19:49.849 09:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:49.849 09:34:21 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:49.849 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:49.849 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.849 09:34:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1163627 00:19:49.849 09:34:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1163627 00:19:49.849 09:34:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:49.849 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1163627 ']' 00:19:49.849 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.849 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:49.849 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.849 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:49.849 09:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.792 09:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:50.792 09:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:19:50.792 09:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:50.792 09:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:50.792 09:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.792 09:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.792 09:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:50.792 09:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1163627 00:19:50.792 09:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1163627 ']' 00:19:50.792 09:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.792 09:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:50.792 09:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
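Note: at this point the target application is relaunched with authentication debug logging enabled. A condensed sketch of what nvmfappstart plus waitforlisten amount to, with the binary and flags taken from the log line above and the polling loop an assumption of ours rather than a copy of common.sh:

    # Launch nvmf_tgt in the test namespace, holding RPC until configuration arrives.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # Hypothetical wait loop: poll the default RPC socket until the app is listening.
    for ((i = 0; i < 100; i++)); do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done

The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is the visible side of that wait.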
00:19:50.792 09:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:50.792 09:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.792 09:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:50.792 09:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:19:50.793 09:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:50.793 09:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:50.793 09:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.054 09:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.054 09:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:51.054 09:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.054 09:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:51.054 09:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:51.054 09:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:51.054 09:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.054 09:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:51.054 09:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.054 09:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.054 09:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.054 09:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.054 09:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.625 00:19:51.625 09:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.625 09:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.625 09:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.886 09:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.886 09:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.886 09:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.886 09:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.886 09:34:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.886 09:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.886 { 00:19:51.886 
"cntlid": 1, 00:19:51.886 "qid": 0, 00:19:51.886 "state": "enabled", 00:19:51.886 "listen_address": { 00:19:51.886 "trtype": "TCP", 00:19:51.886 "adrfam": "IPv4", 00:19:51.886 "traddr": "10.0.0.2", 00:19:51.886 "trsvcid": "4420" 00:19:51.886 }, 00:19:51.886 "peer_address": { 00:19:51.886 "trtype": "TCP", 00:19:51.886 "adrfam": "IPv4", 00:19:51.886 "traddr": "10.0.0.1", 00:19:51.886 "trsvcid": "37700" 00:19:51.886 }, 00:19:51.886 "auth": { 00:19:51.886 "state": "completed", 00:19:51.886 "digest": "sha512", 00:19:51.886 "dhgroup": "ffdhe8192" 00:19:51.886 } 00:19:51.886 } 00:19:51.886 ]' 00:19:51.886 09:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.886 09:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.886 09:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.886 09:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:51.886 09:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.886 09:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.886 09:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.886 09:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.147 09:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MGI1ZWQwMWIxZjcwNWJiODc5ZTNkM2RhYTY0MjBlZDYyNzExODBlYTNjNDAyYmUwZTU5NTVmN2JiYzJhMDkxZIZ6OA8=: 00:19:52.763 09:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.023 09:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.023 09:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.023 09:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.023 09:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.023 09:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:53.023 09:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.023 09:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.023 09:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.023 09:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:53.023 09:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:53.023 09:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.023 09:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:53.023 09:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.023 09:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:53.023 09:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:53.023 09:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:53.023 09:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:53.023 09:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.023 09:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.284 request: 00:19:53.284 { 00:19:53.284 "name": "nvme0", 00:19:53.284 "trtype": "tcp", 00:19:53.284 "traddr": "10.0.0.2", 00:19:53.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:53.284 "adrfam": "ipv4", 00:19:53.284 "trsvcid": "4420", 00:19:53.284 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:53.284 "dhchap_key": "key3", 00:19:53.284 "method": "bdev_nvme_attach_controller", 00:19:53.284 "req_id": 1 00:19:53.284 } 00:19:53.284 Got JSON-RPC error response 00:19:53.284 response: 00:19:53.284 { 00:19:53.284 "code": -5, 00:19:53.284 "message": "Input/output error" 00:19:53.284 } 00:19:53.284 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:53.284 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:53.284 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:53.284 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:53.284 09:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:53.284 09:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:53.284 09:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:53.284 09:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:53.545 09:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.545 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:53.545 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.545 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:53.545 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:53.545 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:53.545 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:53.545 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.545 09:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.806 request: 00:19:53.806 { 00:19:53.806 "name": "nvme0", 00:19:53.806 "trtype": "tcp", 00:19:53.806 "traddr": "10.0.0.2", 00:19:53.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:53.806 "adrfam": "ipv4", 00:19:53.806 "trsvcid": "4420", 00:19:53.806 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:53.806 "dhchap_key": "key3", 00:19:53.806 "method": "bdev_nvme_attach_controller", 00:19:53.806 "req_id": 1 00:19:53.806 } 00:19:53.806 Got JSON-RPC error response 00:19:53.806 response: 00:19:53.806 { 00:19:53.806 "code": -5, 00:19:53.806 "message": "Input/output error" 00:19:53.806 } 00:19:53.806 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:53.806 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:53.806 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:53.806 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:53.806 09:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:53.806 09:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:53.806 09:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:53.806 09:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:53.806 09:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:53.806 09:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:54.069 09:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.069 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.069 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.069 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.069 09:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.069 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.069 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.069 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.069 09:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:54.069 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:54.069 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:54.069 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:54.069 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:54.069 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:54.069 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:54.069 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:54.069 09:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:54.330 request: 00:19:54.330 { 00:19:54.330 "name": "nvme0", 00:19:54.330 "trtype": "tcp", 00:19:54.330 "traddr": "10.0.0.2", 00:19:54.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:54.330 "adrfam": "ipv4", 00:19:54.330 "trsvcid": "4420", 00:19:54.330 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:54.330 "dhchap_key": "key0", 00:19:54.330 "dhchap_ctrlr_key": "key1", 00:19:54.330 "method": "bdev_nvme_attach_controller", 00:19:54.330 "req_id": 1 00:19:54.330 } 00:19:54.330 Got JSON-RPC error response 00:19:54.330 response: 00:19:54.330 { 00:19:54.330 "code": -5, 00:19:54.330 "message": "Input/output error" 00:19:54.330 } 00:19:54.330 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:54.330 09:34:25 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:54.330 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:54.330 09:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:54.330 09:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:54.330 09:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:54.591 00:19:54.591 09:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:54.591 09:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:54.591 09:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.591 09:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.591 09:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.591 09:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.852 09:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:54.852 09:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:54.852 09:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1126846 00:19:54.852 09:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 1126846 ']' 00:19:54.852 09:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 1126846 00:19:54.852 09:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:19:54.852 09:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:54.852 09:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1126846 00:19:55.113 09:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:19:55.113 09:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:19:55.113 09:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1126846' 00:19:55.113 killing process with pid 1126846 00:19:55.113 09:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 1126846 00:19:55.113 09:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 1126846 00:19:55.113 09:34:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:55.113 09:34:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:55.113 09:34:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:55.113 09:34:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:55.113 09:34:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
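Note: every negative case above (wrong key, wrong digest, wrong dhgroup) is wrapped in a NOT helper that inverts the exit status. A sketch consistent with the es checks traced in the log (es=1, es > 128, !es == 0); the exact body of autotest_common.sh may differ:

    NOT() {
        local es=0
        "$@" || es=$?
        # A status above 128 means the command died on a signal: treat that as a
        # real crash and propagate it rather than counting it as an expected failure.
        (( es > 128 )) && return "$es"
        # Succeed only if the wrapped command failed, i.e. auth was correctly refused.
        (( es != 0 ))
    }
    # Usage, mirroring the failure cases above: expect attach to fail with a
    # key the target never granted to this host.
    NOT "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2

Each refused handshake surfaces as the JSON-RPC "Input/output error" (code -5) responses seen in the request/response dumps above.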
00:19:55.113 09:34:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:55.113 09:34:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:55.113 rmmod nvme_tcp 00:19:55.113 rmmod nvme_fabrics 00:19:55.374 rmmod nvme_keyring 00:19:55.374 09:34:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:55.374 09:34:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:55.374 09:34:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:55.374 09:34:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1163627 ']' 00:19:55.374 09:34:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1163627 00:19:55.374 09:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 1163627 ']' 00:19:55.374 09:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 1163627 00:19:55.374 09:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:19:55.374 09:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:55.374 09:34:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1163627 00:19:55.374 09:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:55.374 09:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:55.374 09:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1163627' 00:19:55.374 killing process with pid 1163627 00:19:55.374 09:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 1163627 00:19:55.374 09:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 1163627 00:19:55.374 09:34:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:55.374 09:34:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:55.374 09:34:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:55.374 09:34:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:55.374 09:34:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:55.374 09:34:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.374 09:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.374 09:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.919 09:34:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:57.919 09:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.hjv /tmp/spdk.key-sha256.Zz1 /tmp/spdk.key-sha384.ugt /tmp/spdk.key-sha512.bcl /tmp/spdk.key-sha512.RVK /tmp/spdk.key-sha384.CG6 /tmp/spdk.key-sha256.dWg '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:57.920 00:19:57.920 real 2m36.830s 00:19:57.920 user 5m58.871s 00:19:57.920 sys 0m20.588s 00:19:57.920 09:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:57.920 09:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.920 ************************************ 00:19:57.920 END TEST 
nvmf_auth_target 00:19:57.920 ************************************ 00:19:57.920 09:34:29 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:57.920 09:34:29 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:57.920 09:34:29 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:19:57.920 09:34:29 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:57.920 09:34:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:57.920 ************************************ 00:19:57.920 START TEST nvmf_bdevio_no_huge 00:19:57.920 ************************************ 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:57.920 * Looking for test storage... 00:19:57.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:57.920 09:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:04.526 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:04.526 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:04.526 09:34:36 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:04.526 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:04.527 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:04.527 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:04.527 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:04.788 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:04.788 
09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:04.788 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:04.788 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:04.788 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:04.788 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:04.788 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:04.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:20:04.788 00:20:04.788 --- 10.0.0.2 ping statistics --- 00:20:04.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.788 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:20:04.788 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:04.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:04.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:20:04.788 00:20:04.788 --- 10.0.0.1 ping statistics --- 00:20:04.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.788 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:20:04.788 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.788 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:04.788 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:04.788 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.788 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:04.788 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:04.788 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.788 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:04.788 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:05.048 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:05.048 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:05.048 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:05.048 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.048 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1168847 00:20:05.048 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1168847 00:20:05.048 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:05.048 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # '[' -z 1168847 ']' 00:20:05.048 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.048 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:20:05.048 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.048 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:05.048 09:34:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.048 [2024-06-11 09:34:36.682994] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:20:05.048 [2024-06-11 09:34:36.683051] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:05.048 [2024-06-11 09:34:36.771046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:05.308 [2024-06-11 09:34:36.867711] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.308 [2024-06-11 09:34:36.867747] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.308 [2024-06-11 09:34:36.867755] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.308 [2024-06-11 09:34:36.867761] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.308 [2024-06-11 09:34:36.867767] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:05.308 [2024-06-11 09:34:36.867916] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:20:05.308 [2024-06-11 09:34:36.868069] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:20:05.308 [2024-06-11 09:34:36.868222] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:20:05.308 [2024-06-11 09:34:36.868223] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@863 -- # return 0 00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.879 [2024-06-11 09:34:37.620983] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:20:05.879 Malloc0
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:20:05.879 [2024-06-11 09:34:37.674625] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=()
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:20:05.879 {
00:20:05.879 "params": {
00:20:05.879 "name": "Nvme$subsystem",
00:20:05.879 "trtype": "$TEST_TRANSPORT",
00:20:05.879 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:05.879 "adrfam": "ipv4",
00:20:05.879 "trsvcid": "$NVMF_PORT",
00:20:05.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:05.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:05.879 "hdgst": ${hdgst:-false},
00:20:05.879 "ddgst": ${ddgst:-false}
00:20:05.879 },
00:20:05.879 "method": "bdev_nvme_attach_controller"
00:20:05.879 }
00:20:05.879 EOF
00:20:05.879 )")
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat
00:20:05.879 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq .
00:20:06.139 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=,
00:20:06.139 09:34:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:20:06.139 "params": {
00:20:06.139 "name": "Nvme1",
00:20:06.139 "trtype": "tcp",
00:20:06.140 "traddr": "10.0.0.2",
00:20:06.140 "adrfam": "ipv4",
00:20:06.140 "trsvcid": "4420",
00:20:06.140 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:06.140 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:06.140 "hdgst": false,
00:20:06.140 "ddgst": false
00:20:06.140 },
00:20:06.140 "method": "bdev_nvme_attach_controller"
00:20:06.140 }'
00:20:06.140 [2024-06-11 09:34:37.729959] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:20:06.140 [2024-06-11 09:34:37.730028] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1169042 ]
00:20:06.140 [2024-06-11 09:34:37.816043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:20:06.140 [2024-06-11 09:34:37.922499] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:20:06.140 [2024-06-11 09:34:37.922757] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:20:06.140 [2024-06-11 09:34:37.922763] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:20:06.711 I/O targets:
00:20:06.711 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:20:06.711
00:20:06.711
00:20:06.711 CUnit - A unit testing framework for C - Version 2.1-3
00:20:06.711 http://cunit.sourceforge.net/
00:20:06.711
00:20:06.711
00:20:06.711 Suite: bdevio tests on: Nvme1n1
00:20:06.711 Test: blockdev write read block ...passed
00:20:06.711 Test: blockdev write zeroes read block ...passed
00:20:06.711 Test: blockdev write zeroes read no split ...passed
00:20:06.711 Test: blockdev write zeroes read split ...passed
00:20:06.711 Test: blockdev write zeroes read split partial ...passed
00:20:06.711 Test: blockdev reset ...[2024-06-11 09:34:38.367670] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:06.711 [2024-06-11 09:34:38.367733] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x52caf0 (9): Bad file descriptor
00:20:06.711 [2024-06-11 09:34:38.384078] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:06.711 passed
00:20:06.711 Test: blockdev write read 8 blocks ...passed
00:20:06.711 Test: blockdev write read size > 128k ...passed
00:20:06.711 Test: blockdev write read invalid size ...passed
00:20:06.711 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:20:06.711 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:20:06.711 Test: blockdev write read max offset ...passed
00:20:06.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:20:06.971 Test: blockdev writev readv 8 blocks ...passed
00:20:06.971 Test: blockdev writev readv 30 x 1block ...passed
00:20:06.971 Test: blockdev writev readv block ...passed
00:20:06.971 Test: blockdev writev readv size > 128k ...passed
00:20:06.971 Test: blockdev writev readv size > 128k in two iovs ...passed
00:20:06.971 Test: blockdev comparev and writev ...[2024-06-11 09:34:38.775506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:06.971 [2024-06-11 09:34:38.775531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:06.971 [2024-06-11 09:34:38.775542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:06.971 [2024-06-11 09:34:38.775548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:06.971 [2024-06-11 09:34:38.776103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:06.971 [2024-06-11 09:34:38.776112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:20:06.971 [2024-06-11 09:34:38.776121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:06.971 [2024-06-11 09:34:38.776127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:20:06.971 [2024-06-11 09:34:38.776614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:06.971 [2024-06-11 09:34:38.776622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:20:06.971 [2024-06-11 09:34:38.776631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:06.972 [2024-06-11 09:34:38.776637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:20:06.972 [2024-06-11 09:34:38.777155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:06.972 [2024-06-11 09:34:38.777165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:20:06.972 [2024-06-11 09:34:38.777174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:06.972 [2024-06-11 09:34:38.777180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:20:07.231 passed
00:20:07.232 Test: blockdev nvme passthru rw ...passed
00:20:07.232 Test: blockdev nvme passthru vendor specific ...[2024-06-11 09:34:38.863077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:07.232 [2024-06-11 09:34:38.863089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:20:07.232 [2024-06-11 09:34:38.863475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:07.232 [2024-06-11 09:34:38.863484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:20:07.232 [2024-06-11 09:34:38.863841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:07.232 [2024-06-11 09:34:38.863849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:20:07.232 [2024-06-11 09:34:38.864253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:07.232 [2024-06-11 09:34:38.864260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:20:07.232 passed
00:20:07.232 Test: blockdev nvme admin passthru ...passed
00:20:07.232 Test: blockdev copy ...passed
00:20:07.232
00:20:07.232 Run Summary: Type Total Ran Passed Failed Inactive
00:20:07.232 suites 1 1 n/a 0 0
00:20:07.232 tests 23 23 23 0 0
00:20:07.232 asserts 152 152 152 0 n/a
00:20:07.232
00:20:07.232 Elapsed time = 1.370 seconds
00:20:07.491 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:07.491 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:07.492 rmmod nvme_tcp
00:20:07.492 rmmod nvme_fabrics
00:20:07.492 rmmod nvme_keyring
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1168847 ']'
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1168847
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@949 -- # '[' -z 1168847 ']'
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # kill -0 1168847
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # uname
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:20:07.492 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1168847
00:20:07.752 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # process_name=reactor_3
00:20:07.752 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']'
00:20:07.752 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1168847'
00:20:07.752 killing process with pid 1168847
00:20:07.752 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # kill 1168847
00:20:07.752 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # wait 1168847
00:20:08.013 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:20:08.013 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:20:08.013 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:20:08.013 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:08.013 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:08.013 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:08.013 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:08.013 09:34:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:10.557 09:34:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:10.557
00:20:10.557 real 0m12.468s
00:20:10.557 user 0m15.602s
00:20:10.557 sys 0m6.488s
00:20:10.557 09:34:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # xtrace_disable
00:20:10.557 09:34:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:20:10.557 ************************************
00:20:10.557 END TEST nvmf_bdevio_no_huge
00:20:10.557 ************************************
00:20:10.557 09:34:41 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp
00:20:10.557 09:34:41 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:20:10.557 09:34:41 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:20:10.557 09:34:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:20:10.557 ************************************
00:20:10.557 START TEST nvmf_tls
00:20:10.557 ************************************
00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp
00:20:10.557 * Looking for test storage...
00:20:10.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.557 09:34:41 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.558 09:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.558 09:34:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.558 09:34:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:10.558 09:34:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:10.558 09:34:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:10.558 09:34:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:17.218 
09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:17.218 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:17.218 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:17.218 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:17.218 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:17.218 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:17.219 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:17.219 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:17.219 09:34:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:17.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:20:17.480 00:20:17.480 --- 10.0.0.2 ping statistics --- 00:20:17.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.480 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:17.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:17.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms
00:20:17.480
00:20:17.480 --- 10.0.0.1 ping statistics ---
00:20:17.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:17.480 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1173588
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1173588
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1173588 ']'
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:17.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable
00:20:17.480 09:34:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:17.480 [2024-06-11 09:34:49.190069] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:20:17.480 [2024-06-11 09:34:49.190133] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:17.480 EAL: No free 2048 kB hugepages reported on node 1
00:20:17.480 [2024-06-11 09:34:49.262279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:17.741 [2024-06-11 09:34:49.334741] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:17.741 [2024-06-11 09:34:49.334778] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:17.741 [2024-06-11 09:34:49.334785] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.741 [2024-06-11 09:34:49.334791] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.741 [2024-06-11 09:34:49.334797] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.741 [2024-06-11 09:34:49.334817] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.312 09:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:18.312 09:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:18.312 09:34:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:18.312 09:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:18.312 09:34:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.312 09:34:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.312 09:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:18.312 09:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:18.573 true 00:20:18.573 09:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:18.573 09:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:18.834 09:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:18.834 09:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:18.834 09:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:19.095 09:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:19.095 09:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:19.095 09:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:19.095 09:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:19.095 09:34:50 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:19.355 09:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:19.355 09:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:19.616 09:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:19.616 09:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:19.616 09:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:19.617 09:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:19.877 09:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:19.877 09:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:19.877 09:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:20.138 09:34:51 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:20.138 09:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:20.138 09:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:20.138 09:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:20.138 09:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:20.398 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:20.398 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.qtDdo77mdR 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.ufo6izfGeK 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.qtDdo77mdR 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ufo6izfGeK 00:20:20.659 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:20.919 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:21.179 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.qtDdo77mdR 00:20:21.180 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.qtDdo77mdR 00:20:21.180 09:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:21.440 [2024-06-11 09:34:53.048566] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.440 09:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:21.701 09:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:21.701 [2024-06-11 09:34:53.453580] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.701 [2024-06-11 09:34:53.453784] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.701 09:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:21.963 malloc0 00:20:21.963 09:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:22.223 09:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qtDdo77mdR 00:20:22.224 [2024-06-11 09:34:54.005699] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:22.224 09:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.qtDdo77mdR 00:20:22.484 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.487 Initializing NVMe Controllers 00:20:32.487 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:32.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:32.487 Initialization complete. Launching workers. 
00:20:32.487 ======================================================== 00:20:32.487 Latency(us) 00:20:32.487 Device Information : IOPS MiB/s Average min max 00:20:32.487 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13464.79 52.60 4753.82 1132.29 7694.12 00:20:32.487 ======================================================== 00:20:32.487 Total : 13464.79 52.60 4753.82 1132.29 7694.12 00:20:32.487 00:20:32.487 09:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qtDdo77mdR 00:20:32.487 09:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:32.487 09:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:32.487 09:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:32.487 09:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qtDdo77mdR' 00:20:32.487 09:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:32.487 09:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1176434 00:20:32.487 09:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:32.487 09:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1176434 /var/tmp/bdevperf.sock 00:20:32.487 09:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:32.487 09:35:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1176434 ']' 00:20:32.487 09:35:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:32.487 09:35:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:32.487 09:35:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:32.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:32.487 09:35:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:32.487 09:35:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.487 [2024-06-11 09:35:04.185328] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:20:32.487 [2024-06-11 09:35:04.185383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1176434 ] 00:20:32.487 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.487 [2024-06-11 09:35:04.235706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.487 [2024-06-11 09:35:04.288112] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.748 09:35:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:32.748 09:35:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:32.748 09:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qtDdo77mdR 00:20:32.748 [2024-06-11 09:35:04.543972] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:32.748 [2024-06-11 09:35:04.544029] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:33.009 TLSTESTn1 00:20:33.009 09:35:04 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:33.009 Running I/O for 10 seconds... 00:20:43.008 00:20:43.008 Latency(us) 00:20:43.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.008 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:43.008 Verification LBA range: start 0x0 length 0x2000 00:20:43.008 TLSTESTn1 : 10.05 4130.38 16.13 0.00 0.00 30906.81 4560.21 46530.56 00:20:43.008 =================================================================================================================== 00:20:43.008 Total : 4130.38 16.13 0.00 0.00 30906.81 4560.21 46530.56 00:20:43.008 0 00:20:43.269 09:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:43.269 09:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1176434 00:20:43.269 09:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1176434 ']' 00:20:43.269 09:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1176434 00:20:43.269 09:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:43.269 09:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:43.269 09:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1176434 00:20:43.269 09:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:43.269 09:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:43.269 09:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1176434' 00:20:43.269 killing process with pid 1176434 00:20:43.269 09:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1176434 00:20:43.269 Received shutdown signal, test time was about 10.000000 seconds 00:20:43.269 00:20:43.269 Latency(us) 00:20:43.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:43.269 =================================================================================================================== 00:20:43.269 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:43.269 [2024-06-11 09:35:14.890177] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:43.269 09:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1176434 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ufo6izfGeK 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ufo6izfGeK 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ufo6izfGeK 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ufo6izfGeK' 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1178618 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1178618 /var/tmp/bdevperf.sock 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1178618 ']' 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:43.269 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.269 [2024-06-11 09:35:15.030621] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:20:43.269 [2024-06-11 09:35:15.030666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1178618 ] 00:20:43.270 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.270 [2024-06-11 09:35:15.072345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.529 [2024-06-11 09:35:15.124056] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.529 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:43.529 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:43.529 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ufo6izfGeK 00:20:43.790 [2024-06-11 09:35:15.412110] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:43.790 [2024-06-11 09:35:15.412164] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:43.790 [2024-06-11 09:35:15.416482] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:43.790 [2024-06-11 09:35:15.417088] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb76de0 (107): Transport endpoint is not connected 00:20:43.790 [2024-06-11 09:35:15.418083] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb76de0 (9): Bad file descriptor 00:20:43.790 [2024-06-11 09:35:15.419085] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:43.790 [2024-06-11 09:35:15.419092] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:43.790 [2024-06-11 09:35:15.419098] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
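Note: the two key files exercised here (/tmp/tmp.qtDdo77mdR holding the first key, /tmp/tmp.ufo6izfGeK holding the second) come from the format_interchange_psk helper traced earlier in this test: the raw key bytes get a CRC32 appended and the result is base64-wrapped into NVMeTLSkey-1:<digest>:<base64>:. The attach above presented key 2 to a subsystem provisioned with key 1, so the handshake finds no matching PSK and the socket dies with errno 107 before any NVMe traffic. A minimal Python sketch of that encoding, assuming the CRC32-little-endian-then-base64 layout implied by the helper's python heredoc (the function name and exact CRC placement are inferred, not quoted from SPDK):

    import base64
    import zlib

    def format_interchange_psk(key: bytes, digest: int = 1) -> str:
        # Sketch of the helper traced above: append the little-endian CRC32
        # of the key bytes, base64 the result, and wrap it in the NVMe TLS
        # PSK interchange framing. The CRC layout is an inference.
        crc = zlib.crc32(key).to_bytes(4, byteorder="little")
        b64 = base64.b64encode(key + crc).decode("ascii")
        return "NVMeTLSkey-1:{:02x}:{}:".format(digest, b64)

    # Should reproduce the first key from this run, if the layout holds:
    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
    print(format_interchange_psk(b"00112233445566778899aabbccddeeff"))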
00:20:43.790 request: 00:20:43.790 { 00:20:43.790 "name": "TLSTEST", 00:20:43.790 "trtype": "tcp", 00:20:43.790 "traddr": "10.0.0.2", 00:20:43.790 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:43.790 "adrfam": "ipv4", 00:20:43.790 "trsvcid": "4420", 00:20:43.790 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.790 "psk": "/tmp/tmp.ufo6izfGeK", 00:20:43.790 "method": "bdev_nvme_attach_controller", 00:20:43.790 "req_id": 1 00:20:43.790 } 00:20:43.790 Got JSON-RPC error response 00:20:43.790 response: 00:20:43.790 { 00:20:43.790 "code": -5, 00:20:43.790 "message": "Input/output error" 00:20:43.790 } 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1178618 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1178618 ']' 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1178618 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1178618 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1178618' 00:20:43.790 killing process with pid 1178618 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1178618 00:20:43.790 Received shutdown signal, test time was about 10.000000 seconds 00:20:43.790 00:20:43.790 Latency(us) 00:20:43.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.790 =================================================================================================================== 00:20:43.790 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:43.790 [2024-06-11 09:35:15.487850] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1178618 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qtDdo77mdR 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qtDdo77mdR 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:43.790 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # 
case "$(type -t "$arg")" in 00:20:43.791 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qtDdo77mdR 00:20:43.791 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:43.791 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:43.791 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:43.791 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qtDdo77mdR' 00:20:43.791 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.791 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.791 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1178780 00:20:43.791 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.791 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1178780 /var/tmp/bdevperf.sock 00:20:43.791 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1178780 ']' 00:20:43.791 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.791 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:43.791 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.791 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:43.791 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.051 [2024-06-11 09:35:15.618611] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:20:44.051 [2024-06-11 09:35:15.618655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1178780 ] 00:20:44.051 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.051 [2024-06-11 09:35:15.660032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.051 [2024-06-11 09:35:15.711418] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.051 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:44.051 09:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:44.051 09:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.qtDdo77mdR 00:20:44.314 [2024-06-11 09:35:15.995401] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.314 [2024-06-11 09:35:15.995455] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:44.314 [2024-06-11 09:35:15.999790] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:44.314 [2024-06-11 09:35:15.999813] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:44.314 [2024-06-11 09:35:15.999836] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:44.314 [2024-06-11 09:35:16.000516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b48de0 (107): Transport endpoint is not connected 00:20:44.314 [2024-06-11 09:35:16.001511] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b48de0 (9): Bad file descriptor 00:20:44.314 [2024-06-11 09:35:16.002512] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:44.314 [2024-06-11 09:35:16.002520] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:44.314 [2024-06-11 09:35:16.002527] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
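Note: the tcp_sock_get_key and posix errors above show the exact lookup key the target uses during the TLS handshake: a PSK identity of the form 'NVMe0R01 <hostnqn> <subnqn>'. nqn.2016-06.io.spdk:host2 was never added to cnode1, so nothing is registered under that identity and the connection is cut at the TLS layer, which the initiator then reports as the same errno 107 / bad file descriptor sequence. A sketch of the identity string as it appears in these logs (the 'NVMe0R01' prefix is copied verbatim from the error text, not from a spec):

    def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
        # Space-separated identity exactly as printed by tcp_sock_get_key
        # above; "NVMe0R01" is taken verbatim from the log lines.
        return f"NVMe0R01 {hostnqn} {subnqn}"

    assert (tls_psk_identity("nqn.2016-06.io.spdk:host2",
                             "nqn.2016-06.io.spdk:cnode1")
            == "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1")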
00:20:44.314 request: 00:20:44.314 { 00:20:44.314 "name": "TLSTEST", 00:20:44.314 "trtype": "tcp", 00:20:44.314 "traddr": "10.0.0.2", 00:20:44.314 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:44.314 "adrfam": "ipv4", 00:20:44.314 "trsvcid": "4420", 00:20:44.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.314 "psk": "/tmp/tmp.qtDdo77mdR", 00:20:44.314 "method": "bdev_nvme_attach_controller", 00:20:44.314 "req_id": 1 00:20:44.314 } 00:20:44.314 Got JSON-RPC error response 00:20:44.314 response: 00:20:44.314 { 00:20:44.315 "code": -5, 00:20:44.315 "message": "Input/output error" 00:20:44.315 } 00:20:44.315 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1178780 00:20:44.315 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1178780 ']' 00:20:44.315 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1178780 00:20:44.315 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:44.315 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:44.315 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1178780 00:20:44.315 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:44.315 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:44.315 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1178780' 00:20:44.315 killing process with pid 1178780 00:20:44.315 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1178780 00:20:44.315 Received shutdown signal, test time was about 10.000000 seconds 00:20:44.315 00:20:44.315 Latency(us) 00:20:44.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.315 =================================================================================================================== 00:20:44.315 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:44.315 [2024-06-11 09:35:16.070422] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:44.315 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1178780 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qtDdo77mdR 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qtDdo77mdR 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # 
case "$(type -t "$arg")" in 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qtDdo77mdR 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qtDdo77mdR' 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1178796 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1178796 /var/tmp/bdevperf.sock 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1178796 ']' 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.605 [2024-06-11 09:35:16.234605] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:20:44.605 [2024-06-11 09:35:16.234676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1178796 ] 00:20:44.605 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.605 [2024-06-11 09:35:16.283621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.605 [2024-06-11 09:35:16.335425] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:44.605 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qtDdo77mdR 00:20:44.866 [2024-06-11 09:35:16.575144] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.866 [2024-06-11 09:35:16.575198] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:44.866 [2024-06-11 09:35:16.579469] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:44.866 [2024-06-11 09:35:16.579493] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:44.866 [2024-06-11 09:35:16.579517] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:44.866 [2024-06-11 09:35:16.580131] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4fde0 (107): Transport endpoint is not connected 00:20:44.866 [2024-06-11 09:35:16.581125] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4fde0 (9): Bad file descriptor 00:20:44.866 [2024-06-11 09:35:16.582127] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:44.866 [2024-06-11 09:35:16.582136] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:44.866 [2024-06-11 09:35:16.582143] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
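Note: each of these negative cases runs through the harness NOT wrapper (the es=0 / valid_exec_arg trace lines), so the attach is required to fail, and the '# return 1' further down is what turns the expected failure into a pass. A rough Python equivalent of that assertion, with the rpc.py path abbreviated (the subprocess form is illustrative, not the harness's actual code):

    import subprocess

    def expect_attach_failure(rpc_py: str, *args: str) -> None:
        # Equivalent of the shell NOT wrapper: the RPC has to exit nonzero
        # for the test case to count as a pass.
        result = subprocess.run([rpc_py, "-s", "/var/tmp/bdevperf.sock",
                                 "bdev_nvme_attach_controller", *args])
        assert result.returncode != 0, "attach unexpectedly succeeded"

    # host1 never registered a key for cnode2, so this must be rejected:
    expect_attach_failure("spdk/scripts/rpc.py",
                          "-b", "TLSTEST", "-t", "tcp", "-a", "10.0.0.2",
                          "-s", "4420", "-f", "ipv4",
                          "-n", "nqn.2016-06.io.spdk:cnode2",
                          "-q", "nqn.2016-06.io.spdk:host1",
                          "--psk", "/tmp/tmp.qtDdo77mdR")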
00:20:44.866 request: 00:20:44.866 { 00:20:44.866 "name": "TLSTEST", 00:20:44.866 "trtype": "tcp", 00:20:44.866 "traddr": "10.0.0.2", 00:20:44.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:44.866 "adrfam": "ipv4", 00:20:44.866 "trsvcid": "4420", 00:20:44.866 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:44.866 "psk": "/tmp/tmp.qtDdo77mdR", 00:20:44.866 "method": "bdev_nvme_attach_controller", 00:20:44.866 "req_id": 1 00:20:44.866 } 00:20:44.866 Got JSON-RPC error response 00:20:44.866 response: 00:20:44.866 { 00:20:44.866 "code": -5, 00:20:44.866 "message": "Input/output error" 00:20:44.866 } 00:20:44.866 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1178796 00:20:44.866 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1178796 ']' 00:20:44.866 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1178796 00:20:44.866 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:44.866 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:44.866 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1178796 00:20:44.866 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:44.867 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:44.867 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1178796' 00:20:44.867 killing process with pid 1178796 00:20:44.867 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1178796 00:20:44.867 Received shutdown signal, test time was about 10.000000 seconds 00:20:44.867 00:20:44.867 Latency(us) 00:20:44.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.867 =================================================================================================================== 00:20:44.867 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:44.867 [2024-06-11 09:35:16.653336] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:44.867 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1178796 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
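Note: the request: / response: blocks dumped above are the literal JSON-RPC exchanges between rpc.py and the bdevperf application socket; the dump folds method and req_id into one object, while on the wire the request is presumably a standard JSON-RPC 2.0 envelope. A minimal client sketch for replaying such a call over the Unix socket, assuming the server reads one bare JSON object per request and answers with one (no length prefix or header framing):

    import json
    import socket

    def spdk_rpc(sock_path: str, method: str, params: dict) -> dict:
        # Send one JSON-RPC request and buffer the reply until it parses.
        req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    raise ConnectionError("socket closed before a full reply")
                buf += chunk
                try:
                    return json.loads(buf)
                except json.JSONDecodeError:
                    continue  # reply not complete yet

    resp = spdk_rpc("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
        "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode2",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "psk": "/tmp/tmp.qtDdo77mdR",
    })
    # A failed attach comes back under "error", e.g.
    # {"code": -5, "message": "Input/output error"} as dumped above.
    print(resp.get("error") or resp.get("result"))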
00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1178995 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1178995 /var/tmp/bdevperf.sock 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1178995 ']' 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:45.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:45.128 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.128 [2024-06-11 09:35:16.781801] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:20:45.128 [2024-06-11 09:35:16.781845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1178995 ] 00:20:45.128 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.128 [2024-06-11 09:35:16.823356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.128 [2024-06-11 09:35:16.874909] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.389 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:45.389 09:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:45.389 09:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:45.389 [2024-06-11 09:35:17.170170] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:45.389 [2024-06-11 09:35:17.171595] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd8820 (9): Bad file descriptor 00:20:45.389 [2024-06-11 09:35:17.172595] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:45.389 [2024-06-11 09:35:17.172603] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:45.389 [2024-06-11 09:35:17.172610] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
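Note: this case needed no key file at all; with psk='' the attach goes out as plain NVMe/TCP against a listener created with -k, and the target drops the connection at the first read. One harness detail repeated around every bdevperf launch in this log is the gate 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' with max_retries=100. A plausible equivalent of that readiness poll (only the 100-retry bound is visible in the trace; the sleep interval is a guess):

    import socket
    import time

    def waitforlisten(sock_path: str, max_retries: int = 100) -> None:
        # Gate every rpc.py call on the application's RPC socket actually
        # accepting connections, as the harness does before each test.
        for _ in range(max_retries):
            try:
                with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                    s.connect(sock_path)
                    return
            except OSError:
                time.sleep(0.1)  # retry interval is an assumption
        raise TimeoutError(f"{sock_path} never started listening")

    waitforlisten("/var/tmp/bdevperf.sock")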
00:20:45.389 request: 00:20:45.389 { 00:20:45.389 "name": "TLSTEST", 00:20:45.389 "trtype": "tcp", 00:20:45.389 "traddr": "10.0.0.2", 00:20:45.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.389 "adrfam": "ipv4", 00:20:45.389 "trsvcid": "4420", 00:20:45.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.389 "method": "bdev_nvme_attach_controller", 00:20:45.389 "req_id": 1 00:20:45.389 } 00:20:45.389 Got JSON-RPC error response 00:20:45.389 response: 00:20:45.389 { 00:20:45.389 "code": -5, 00:20:45.389 "message": "Input/output error" 00:20:45.389 } 00:20:45.389 09:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1178995 00:20:45.389 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1178995 ']' 00:20:45.389 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1178995 00:20:45.389 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:45.389 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:45.389 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1178995 00:20:45.649 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:45.649 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:45.649 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1178995' 00:20:45.649 killing process with pid 1178995 00:20:45.649 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1178995 00:20:45.649 Received shutdown signal, test time was about 10.000000 seconds 00:20:45.649 00:20:45.649 Latency(us) 00:20:45.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.649 =================================================================================================================== 00:20:45.649 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:45.649 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1178995 00:20:45.649 09:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:45.649 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:45.649 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:45.649 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:45.649 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:45.649 09:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1173588 00:20:45.649 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1173588 ']' 00:20:45.649 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1173588 00:20:45.649 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:45.649 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:45.649 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1173588 00:20:45.649 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:45.650 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:45.650 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1173588' 00:20:45.650 killing process with pid 1173588 00:20:45.650 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1173588 
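Note: with the first target (pid 1173588) killed, the script below formats a 48-byte key with digest 02, starts a fresh nvmf_tgt, and repeats setup_nvmf_tgt with /tmp/tmp.lRIeTRrdM7. The RPC sequence behind setup_nvmf_tgt is scattered through the trace; collected in one place as a Python sketch (paths abbreviated, and the ip netns wrapper around the target process itself omitted):

    import subprocess

    RPC_PY = "spdk/scripts/rpc.py"  # stands in for the full workspace path

    def rpc(*args: str) -> None:
        subprocess.run([RPC_PY, *args], check=True)

    def setup_nvmf_tgt(key_path: str) -> None:
        # Same RPC sequence as target/tls.sh setup_nvmf_tgt in this trace.
        rpc("nvmf_create_transport", "-t", "tcp", "-o")
        rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
            "-s", "SPDK00000000000001", "-m", "10")
        rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
            "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")  # -k: TLS
        rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
        rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0",
            "-n", "1")
        rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
            "nqn.2016-06.io.spdk:host1", "--psk", key_path)

    setup_nvmf_tgt("/tmp/tmp.lRIeTRrdM7")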
00:20:45.650 [2024-06-11 09:35:17.402914] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:45.650 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1173588 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.lRIeTRrdM7 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.lRIeTRrdM7 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1179151 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1179151 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1179151 ']' 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:45.910 09:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.911 [2024-06-11 09:35:17.668897] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:20:45.911 [2024-06-11 09:35:17.668954] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.911 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.170 [2024-06-11 09:35:17.735395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.170 [2024-06-11 09:35:17.799736] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.171 [2024-06-11 09:35:17.799772] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.171 [2024-06-11 09:35:17.799780] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.171 [2024-06-11 09:35:17.799786] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.171 [2024-06-11 09:35:17.799791] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.171 [2024-06-11 09:35:17.799811] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.742 09:35:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:46.742 09:35:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:46.742 09:35:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:46.742 09:35:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:46.742 09:35:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.742 09:35:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.003 09:35:18 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.lRIeTRrdM7 00:20:47.003 09:35:18 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lRIeTRrdM7 00:20:47.003 09:35:18 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:47.003 [2024-06-11 09:35:18.746699] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.003 09:35:18 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:47.264 09:35:18 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:47.524 [2024-06-11 09:35:19.099584] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:47.524 [2024-06-11 09:35:19.099809] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.524 09:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:47.524 malloc0 00:20:47.524 09:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:47.784 09:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lRIeTRrdM7 
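Note: the chmod 0600 on the key file above is load-bearing; later in this run the same file is flipped to 0666 and bdev_nvme_load_psk refuses it ('Incorrect permissions for PSK file', surfacing as JSON-RPC code -1, Operation not permitted). A sketch of an equivalent pre-flight check (the exact permission mask SPDK enforces is not shown in this log; rejecting any group/other bits is an assumption that fits the 0600-pass / 0666-fail behaviour seen here):

    import os
    import stat

    def check_psk_mode(path: str) -> None:
        # bdev_nvme_load_psk rejects a key file that is too open; refusing
        # any group/other permission bits matches this run's behaviour,
        # though SPDK's actual mask is not visible in the log.
        mode = stat.S_IMODE(os.stat(path).st_mode)
        if mode & 0o077:
            raise PermissionError(
                f"{path}: mode {oct(mode)} too open for a PSK file")

    check_psk_mode("/tmp/tmp.lRIeTRrdM7")  # fine at 0600, raises at 0666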
00:20:47.784 [2024-06-11 09:35:19.535212] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:47.784 09:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lRIeTRrdM7 00:20:47.784 09:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:47.784 09:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:47.784 09:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:47.784 09:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lRIeTRrdM7' 00:20:47.784 09:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:47.784 09:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:47.784 09:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1179513 00:20:47.784 09:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:47.784 09:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1179513 /var/tmp/bdevperf.sock 00:20:47.784 09:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1179513 ']' 00:20:47.784 09:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.784 09:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:47.784 09:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.785 09:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:47.785 09:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.785 [2024-06-11 09:35:19.582767] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:20:47.785 [2024-06-11 09:35:19.582815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179513 ] 00:20:48.045 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.045 [2024-06-11 09:35:19.632402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.045 [2024-06-11 09:35:19.684142] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.045 09:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:48.045 09:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:48.045 09:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lRIeTRrdM7 00:20:48.305 [2024-06-11 09:35:19.899543] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:48.305 [2024-06-11 09:35:19.899605] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:48.305 TLSTESTn1 00:20:48.305 09:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:48.305 Running I/O for 10 seconds... 00:21:00.537 00:21:00.537 Latency(us) 00:21:00.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.537 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:00.537 Verification LBA range: start 0x0 length 0x2000 00:21:00.537 TLSTESTn1 : 10.02 4968.57 19.41 0.00 0.00 25720.07 5898.24 51336.53 00:21:00.537 =================================================================================================================== 00:21:00.537 Total : 4968.57 19.41 0.00 0.00 25720.07 5898.24 51336.53 00:21:00.537 0 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1179513 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1179513 ']' 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1179513 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1179513 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1179513' 00:21:00.537 killing process with pid 1179513 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1179513 00:21:00.537 Received shutdown signal, test time was about 10.000000 seconds 00:21:00.537 00:21:00.537 Latency(us) 00:21:00.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:00.537 =================================================================================================================== 00:21:00.537 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:00.537 [2024-06-11 09:35:30.204185] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1179513 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.lRIeTRrdM7 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lRIeTRrdM7 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lRIeTRrdM7 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lRIeTRrdM7 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lRIeTRrdM7' 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1181597 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1181597 /var/tmp/bdevperf.sock 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1181597 ']' 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.537 [2024-06-11 09:35:30.374012] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:21:00.537 [2024-06-11 09:35:30.374067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1181597 ] 00:21:00.537 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.537 [2024-06-11 09:35:30.423742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.537 [2024-06-11 09:35:30.475076] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lRIeTRrdM7 00:21:00.537 [2024-06-11 09:35:30.743003] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.537 [2024-06-11 09:35:30.743046] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:00.537 [2024-06-11 09:35:30.743051] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.lRIeTRrdM7 00:21:00.537 request: 00:21:00.537 { 00:21:00.537 "name": "TLSTEST", 00:21:00.537 "trtype": "tcp", 00:21:00.537 "traddr": "10.0.0.2", 00:21:00.537 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.537 "adrfam": "ipv4", 00:21:00.537 "trsvcid": "4420", 00:21:00.537 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.537 "psk": "/tmp/tmp.lRIeTRrdM7", 00:21:00.537 "method": "bdev_nvme_attach_controller", 00:21:00.537 "req_id": 1 00:21:00.537 } 00:21:00.537 Got JSON-RPC error response 00:21:00.537 response: 00:21:00.537 { 00:21:00.537 "code": -1, 00:21:00.537 "message": "Operation not permitted" 00:21:00.537 } 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1181597 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1181597 ']' 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1181597 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1181597 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1181597' 00:21:00.537 killing process with pid 1181597 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1181597 00:21:00.537 Received shutdown signal, test time was about 10.000000 seconds 00:21:00.537 00:21:00.537 Latency(us) 00:21:00.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.537 =================================================================================================================== 00:21:00.537 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 
-- # wait 1181597 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1179151 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1179151 ']' 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1179151 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1179151 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1179151' 00:21:00.537 killing process with pid 1179151 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1179151 00:21:00.537 [2024-06-11 09:35:30.987831] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:00.537 09:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1179151 00:21:00.537 09:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1181867 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1181867 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1181867 ']' 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.538 [2024-06-11 09:35:31.183687] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:21:00.538 [2024-06-11 09:35:31.183742] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.538 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.538 [2024-06-11 09:35:31.248251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.538 [2024-06-11 09:35:31.309296] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.538 [2024-06-11 09:35:31.309336] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.538 [2024-06-11 09:35:31.309344] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.538 [2024-06-11 09:35:31.309350] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.538 [2024-06-11 09:35:31.309356] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.538 [2024-06-11 09:35:31.309382] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.lRIeTRrdM7 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.lRIeTRrdM7 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.lRIeTRrdM7 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lRIeTRrdM7 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:00.538 [2024-06-11 09:35:31.630604] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:00.538 09:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:00.538 [2024-06-11 09:35:32.011573] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:21:00.538 [2024-06-11 09:35:32.011793] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.538 09:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:00.538 malloc0 00:21:00.538 09:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:00.801 09:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lRIeTRrdM7 00:21:00.801 [2024-06-11 09:35:32.535483] tcp.c:3580:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:00.801 [2024-06-11 09:35:32.535508] tcp.c:3666:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:00.801 [2024-06-11 09:35:32.535535] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:00.801 request: 00:21:00.801 { 00:21:00.801 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.801 "host": "nqn.2016-06.io.spdk:host1", 00:21:00.801 "psk": "/tmp/tmp.lRIeTRrdM7", 00:21:00.801 "method": "nvmf_subsystem_add_host", 00:21:00.801 "req_id": 1 00:21:00.801 } 00:21:00.801 Got JSON-RPC error response 00:21:00.801 response: 00:21:00.801 { 00:21:00.801 "code": -32603, 00:21:00.801 "message": "Internal error" 00:21:00.801 } 00:21:00.801 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:21:00.801 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:00.801 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:00.801 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:00.801 09:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1181867 00:21:00.801 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1181867 ']' 00:21:00.801 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1181867 00:21:00.801 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:00.801 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:00.801 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1181867 00:21:00.801 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:00.801 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:00.801 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1181867' 00:21:00.801 killing process with pid 1181867 00:21:00.801 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1181867 00:21:00.801 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1181867 00:21:01.062 09:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.lRIeTRrdM7 00:21:01.062 09:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:01.062 09:35:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:01.062 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:01.062 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.062 09:35:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=1182227 00:21:01.062 09:35:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1182227 00:21:01.062 09:35:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:01.062 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1182227 ']' 00:21:01.062 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.062 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:01.062 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.062 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:01.062 09:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.062 [2024-06-11 09:35:32.818469] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:21:01.062 [2024-06-11 09:35:32.818523] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.062 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.322 [2024-06-11 09:35:32.883347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.322 [2024-06-11 09:35:32.945470] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.322 [2024-06-11 09:35:32.945504] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.322 [2024-06-11 09:35:32.945511] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.322 [2024-06-11 09:35:32.945518] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.322 [2024-06-11 09:35:32.945523] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
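The rejected attach above is the point of the chmod 0666 step: SPDK appears to refuse PSK files whose group or world permission bits are set, failing bdev_nvme_attach_controller with "Incorrect permissions for PSK file" and JSON-RPC error -1 ("Operation not permitted"), and the script restores owner-only access before the next pass. A two-line sketch of that contract, reusing the key path from this log:

    chmod 0666 /tmp/tmp.lRIeTRrdM7   # attach is rejected: Operation not permitted
    chmod 0600 /tmp/tmp.lRIeTRrdM7   # owner-only access; attach succeeds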
00:21:01.322 [2024-06-11 09:35:32.945542] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.893 09:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:01.893 09:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:01.893 09:35:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:01.893 09:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:01.893 09:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.893 09:35:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.893 09:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.lRIeTRrdM7 00:21:01.893 09:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lRIeTRrdM7 00:21:01.893 09:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:02.154 [2024-06-11 09:35:33.840524] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.154 09:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:02.415 09:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:02.415 [2024-06-11 09:35:34.229510] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:02.415 [2024-06-11 09:35:34.229710] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.675 09:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:02.675 malloc0 00:21:02.676 09:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:02.936 09:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lRIeTRrdM7 00:21:03.197 [2024-06-11 09:35:34.841909] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:03.197 09:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1182593 00:21:03.197 09:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:03.197 09:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:03.197 09:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1182593 /var/tmp/bdevperf.sock 00:21:03.197 09:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1182593 ']' 00:21:03.197 09:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.197 09:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:03.197 09:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:03.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.197 09:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:03.197 09:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.197 [2024-06-11 09:35:34.903142] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:21:03.197 [2024-06-11 09:35:34.903190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1182593 ] 00:21:03.197 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.197 [2024-06-11 09:35:34.951733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.197 [2024-06-11 09:35:35.003783] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.457 09:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:03.457 09:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:03.457 09:35:35 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lRIeTRrdM7 00:21:03.457 [2024-06-11 09:35:35.271475] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:03.457 [2024-06-11 09:35:35.271529] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:03.717 TLSTESTn1 00:21:03.717 09:35:35 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:03.979 09:35:35 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:03.979 "subsystems": [ 00:21:03.979 { 00:21:03.979 "subsystem": "keyring", 00:21:03.979 "config": [] 00:21:03.979 }, 00:21:03.979 { 00:21:03.979 "subsystem": "iobuf", 00:21:03.979 "config": [ 00:21:03.979 { 00:21:03.979 "method": "iobuf_set_options", 00:21:03.979 "params": { 00:21:03.979 "small_pool_count": 8192, 00:21:03.979 "large_pool_count": 1024, 00:21:03.979 "small_bufsize": 8192, 00:21:03.979 "large_bufsize": 135168 00:21:03.979 } 00:21:03.979 } 00:21:03.979 ] 00:21:03.979 }, 00:21:03.979 { 00:21:03.979 "subsystem": "sock", 00:21:03.979 "config": [ 00:21:03.979 { 00:21:03.979 "method": "sock_set_default_impl", 00:21:03.979 "params": { 00:21:03.979 "impl_name": "posix" 00:21:03.979 } 00:21:03.979 }, 00:21:03.979 { 00:21:03.979 "method": "sock_impl_set_options", 00:21:03.979 "params": { 00:21:03.979 "impl_name": "ssl", 00:21:03.979 "recv_buf_size": 4096, 00:21:03.979 "send_buf_size": 4096, 00:21:03.979 "enable_recv_pipe": true, 00:21:03.979 "enable_quickack": false, 00:21:03.979 "enable_placement_id": 0, 00:21:03.979 "enable_zerocopy_send_server": true, 00:21:03.979 "enable_zerocopy_send_client": false, 00:21:03.979 "zerocopy_threshold": 0, 00:21:03.979 "tls_version": 0, 00:21:03.979 "enable_ktls": false 00:21:03.979 } 00:21:03.979 }, 00:21:03.979 { 00:21:03.979 "method": "sock_impl_set_options", 00:21:03.979 "params": { 00:21:03.979 "impl_name": "posix", 00:21:03.979 "recv_buf_size": 2097152, 00:21:03.979 "send_buf_size": 
2097152, 00:21:03.979 "enable_recv_pipe": true, 00:21:03.979 "enable_quickack": false, 00:21:03.979 "enable_placement_id": 0, 00:21:03.979 "enable_zerocopy_send_server": true, 00:21:03.979 "enable_zerocopy_send_client": false, 00:21:03.979 "zerocopy_threshold": 0, 00:21:03.979 "tls_version": 0, 00:21:03.979 "enable_ktls": false 00:21:03.979 } 00:21:03.979 } 00:21:03.979 ] 00:21:03.979 }, 00:21:03.979 { 00:21:03.979 "subsystem": "vmd", 00:21:03.979 "config": [] 00:21:03.979 }, 00:21:03.979 { 00:21:03.979 "subsystem": "accel", 00:21:03.979 "config": [ 00:21:03.979 { 00:21:03.979 "method": "accel_set_options", 00:21:03.979 "params": { 00:21:03.979 "small_cache_size": 128, 00:21:03.979 "large_cache_size": 16, 00:21:03.979 "task_count": 2048, 00:21:03.979 "sequence_count": 2048, 00:21:03.979 "buf_count": 2048 00:21:03.979 } 00:21:03.979 } 00:21:03.979 ] 00:21:03.979 }, 00:21:03.979 { 00:21:03.979 "subsystem": "bdev", 00:21:03.979 "config": [ 00:21:03.979 { 00:21:03.979 "method": "bdev_set_options", 00:21:03.979 "params": { 00:21:03.979 "bdev_io_pool_size": 65535, 00:21:03.979 "bdev_io_cache_size": 256, 00:21:03.979 "bdev_auto_examine": true, 00:21:03.979 "iobuf_small_cache_size": 128, 00:21:03.979 "iobuf_large_cache_size": 16 00:21:03.979 } 00:21:03.979 }, 00:21:03.979 { 00:21:03.979 "method": "bdev_raid_set_options", 00:21:03.979 "params": { 00:21:03.979 "process_window_size_kb": 1024 00:21:03.979 } 00:21:03.979 }, 00:21:03.979 { 00:21:03.979 "method": "bdev_iscsi_set_options", 00:21:03.979 "params": { 00:21:03.979 "timeout_sec": 30 00:21:03.979 } 00:21:03.979 }, 00:21:03.979 { 00:21:03.979 "method": "bdev_nvme_set_options", 00:21:03.979 "params": { 00:21:03.979 "action_on_timeout": "none", 00:21:03.979 "timeout_us": 0, 00:21:03.979 "timeout_admin_us": 0, 00:21:03.979 "keep_alive_timeout_ms": 10000, 00:21:03.979 "arbitration_burst": 0, 00:21:03.979 "low_priority_weight": 0, 00:21:03.979 "medium_priority_weight": 0, 00:21:03.979 "high_priority_weight": 0, 00:21:03.979 "nvme_adminq_poll_period_us": 10000, 00:21:03.979 "nvme_ioq_poll_period_us": 0, 00:21:03.979 "io_queue_requests": 0, 00:21:03.979 "delay_cmd_submit": true, 00:21:03.979 "transport_retry_count": 4, 00:21:03.979 "bdev_retry_count": 3, 00:21:03.979 "transport_ack_timeout": 0, 00:21:03.979 "ctrlr_loss_timeout_sec": 0, 00:21:03.979 "reconnect_delay_sec": 0, 00:21:03.979 "fast_io_fail_timeout_sec": 0, 00:21:03.979 "disable_auto_failback": false, 00:21:03.979 "generate_uuids": false, 00:21:03.979 "transport_tos": 0, 00:21:03.979 "nvme_error_stat": false, 00:21:03.979 "rdma_srq_size": 0, 00:21:03.979 "io_path_stat": false, 00:21:03.979 "allow_accel_sequence": false, 00:21:03.979 "rdma_max_cq_size": 0, 00:21:03.979 "rdma_cm_event_timeout_ms": 0, 00:21:03.979 "dhchap_digests": [ 00:21:03.979 "sha256", 00:21:03.979 "sha384", 00:21:03.979 "sha512" 00:21:03.979 ], 00:21:03.979 "dhchap_dhgroups": [ 00:21:03.979 "null", 00:21:03.979 "ffdhe2048", 00:21:03.979 "ffdhe3072", 00:21:03.979 "ffdhe4096", 00:21:03.979 "ffdhe6144", 00:21:03.979 "ffdhe8192" 00:21:03.979 ] 00:21:03.979 } 00:21:03.979 }, 00:21:03.979 { 00:21:03.979 "method": "bdev_nvme_set_hotplug", 00:21:03.979 "params": { 00:21:03.979 "period_us": 100000, 00:21:03.979 "enable": false 00:21:03.979 } 00:21:03.979 }, 00:21:03.979 { 00:21:03.979 "method": "bdev_malloc_create", 00:21:03.979 "params": { 00:21:03.979 "name": "malloc0", 00:21:03.979 "num_blocks": 8192, 00:21:03.979 "block_size": 4096, 00:21:03.979 "physical_block_size": 4096, 00:21:03.979 "uuid": 
"39759c0a-eb26-4a85-bff0-b685b57c7aff", 00:21:03.979 "optimal_io_boundary": 0 00:21:03.979 } 00:21:03.979 }, 00:21:03.979 { 00:21:03.979 "method": "bdev_wait_for_examine" 00:21:03.979 } 00:21:03.979 ] 00:21:03.979 }, 00:21:03.979 { 00:21:03.980 "subsystem": "nbd", 00:21:03.980 "config": [] 00:21:03.980 }, 00:21:03.980 { 00:21:03.980 "subsystem": "scheduler", 00:21:03.980 "config": [ 00:21:03.980 { 00:21:03.980 "method": "framework_set_scheduler", 00:21:03.980 "params": { 00:21:03.980 "name": "static" 00:21:03.980 } 00:21:03.980 } 00:21:03.980 ] 00:21:03.980 }, 00:21:03.980 { 00:21:03.980 "subsystem": "nvmf", 00:21:03.980 "config": [ 00:21:03.980 { 00:21:03.980 "method": "nvmf_set_config", 00:21:03.980 "params": { 00:21:03.980 "discovery_filter": "match_any", 00:21:03.980 "admin_cmd_passthru": { 00:21:03.980 "identify_ctrlr": false 00:21:03.980 } 00:21:03.980 } 00:21:03.980 }, 00:21:03.980 { 00:21:03.980 "method": "nvmf_set_max_subsystems", 00:21:03.980 "params": { 00:21:03.980 "max_subsystems": 1024 00:21:03.980 } 00:21:03.980 }, 00:21:03.980 { 00:21:03.980 "method": "nvmf_set_crdt", 00:21:03.980 "params": { 00:21:03.980 "crdt1": 0, 00:21:03.980 "crdt2": 0, 00:21:03.980 "crdt3": 0 00:21:03.980 } 00:21:03.980 }, 00:21:03.980 { 00:21:03.980 "method": "nvmf_create_transport", 00:21:03.980 "params": { 00:21:03.980 "trtype": "TCP", 00:21:03.980 "max_queue_depth": 128, 00:21:03.980 "max_io_qpairs_per_ctrlr": 127, 00:21:03.980 "in_capsule_data_size": 4096, 00:21:03.980 "max_io_size": 131072, 00:21:03.980 "io_unit_size": 131072, 00:21:03.980 "max_aq_depth": 128, 00:21:03.980 "num_shared_buffers": 511, 00:21:03.980 "buf_cache_size": 4294967295, 00:21:03.980 "dif_insert_or_strip": false, 00:21:03.980 "zcopy": false, 00:21:03.980 "c2h_success": false, 00:21:03.980 "sock_priority": 0, 00:21:03.980 "abort_timeout_sec": 1, 00:21:03.980 "ack_timeout": 0, 00:21:03.980 "data_wr_pool_size": 0 00:21:03.980 } 00:21:03.980 }, 00:21:03.980 { 00:21:03.980 "method": "nvmf_create_subsystem", 00:21:03.980 "params": { 00:21:03.980 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.980 "allow_any_host": false, 00:21:03.980 "serial_number": "SPDK00000000000001", 00:21:03.980 "model_number": "SPDK bdev Controller", 00:21:03.980 "max_namespaces": 10, 00:21:03.980 "min_cntlid": 1, 00:21:03.980 "max_cntlid": 65519, 00:21:03.980 "ana_reporting": false 00:21:03.980 } 00:21:03.980 }, 00:21:03.980 { 00:21:03.980 "method": "nvmf_subsystem_add_host", 00:21:03.980 "params": { 00:21:03.980 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.980 "host": "nqn.2016-06.io.spdk:host1", 00:21:03.980 "psk": "/tmp/tmp.lRIeTRrdM7" 00:21:03.980 } 00:21:03.980 }, 00:21:03.980 { 00:21:03.980 "method": "nvmf_subsystem_add_ns", 00:21:03.980 "params": { 00:21:03.980 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.980 "namespace": { 00:21:03.980 "nsid": 1, 00:21:03.980 "bdev_name": "malloc0", 00:21:03.980 "nguid": "39759C0AEB264A85BFF0B685B57C7AFF", 00:21:03.980 "uuid": "39759c0a-eb26-4a85-bff0-b685b57c7aff", 00:21:03.980 "no_auto_visible": false 00:21:03.980 } 00:21:03.980 } 00:21:03.980 }, 00:21:03.980 { 00:21:03.980 "method": "nvmf_subsystem_add_listener", 00:21:03.980 "params": { 00:21:03.980 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.980 "listen_address": { 00:21:03.980 "trtype": "TCP", 00:21:03.980 "adrfam": "IPv4", 00:21:03.980 "traddr": "10.0.0.2", 00:21:03.980 "trsvcid": "4420" 00:21:03.980 }, 00:21:03.980 "secure_channel": true 00:21:03.980 } 00:21:03.980 } 00:21:03.980 ] 00:21:03.980 } 00:21:03.980 ] 00:21:03.980 }' 00:21:03.980 09:35:35 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:04.241 09:35:35 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:04.241 "subsystems": [ 00:21:04.241 { 00:21:04.241 "subsystem": "keyring", 00:21:04.241 "config": [] 00:21:04.241 }, 00:21:04.241 { 00:21:04.241 "subsystem": "iobuf", 00:21:04.241 "config": [ 00:21:04.241 { 00:21:04.241 "method": "iobuf_set_options", 00:21:04.241 "params": { 00:21:04.241 "small_pool_count": 8192, 00:21:04.241 "large_pool_count": 1024, 00:21:04.241 "small_bufsize": 8192, 00:21:04.241 "large_bufsize": 135168 00:21:04.241 } 00:21:04.241 } 00:21:04.241 ] 00:21:04.241 }, 00:21:04.241 { 00:21:04.241 "subsystem": "sock", 00:21:04.241 "config": [ 00:21:04.241 { 00:21:04.241 "method": "sock_set_default_impl", 00:21:04.241 "params": { 00:21:04.241 "impl_name": "posix" 00:21:04.241 } 00:21:04.241 }, 00:21:04.241 { 00:21:04.241 "method": "sock_impl_set_options", 00:21:04.241 "params": { 00:21:04.241 "impl_name": "ssl", 00:21:04.241 "recv_buf_size": 4096, 00:21:04.241 "send_buf_size": 4096, 00:21:04.241 "enable_recv_pipe": true, 00:21:04.241 "enable_quickack": false, 00:21:04.241 "enable_placement_id": 0, 00:21:04.241 "enable_zerocopy_send_server": true, 00:21:04.242 "enable_zerocopy_send_client": false, 00:21:04.242 "zerocopy_threshold": 0, 00:21:04.242 "tls_version": 0, 00:21:04.242 "enable_ktls": false 00:21:04.242 } 00:21:04.242 }, 00:21:04.242 { 00:21:04.242 "method": "sock_impl_set_options", 00:21:04.242 "params": { 00:21:04.242 "impl_name": "posix", 00:21:04.242 "recv_buf_size": 2097152, 00:21:04.242 "send_buf_size": 2097152, 00:21:04.242 "enable_recv_pipe": true, 00:21:04.242 "enable_quickack": false, 00:21:04.242 "enable_placement_id": 0, 00:21:04.242 "enable_zerocopy_send_server": true, 00:21:04.242 "enable_zerocopy_send_client": false, 00:21:04.242 "zerocopy_threshold": 0, 00:21:04.242 "tls_version": 0, 00:21:04.242 "enable_ktls": false 00:21:04.242 } 00:21:04.242 } 00:21:04.242 ] 00:21:04.242 }, 00:21:04.242 { 00:21:04.242 "subsystem": "vmd", 00:21:04.242 "config": [] 00:21:04.242 }, 00:21:04.242 { 00:21:04.242 "subsystem": "accel", 00:21:04.242 "config": [ 00:21:04.242 { 00:21:04.242 "method": "accel_set_options", 00:21:04.242 "params": { 00:21:04.242 "small_cache_size": 128, 00:21:04.242 "large_cache_size": 16, 00:21:04.242 "task_count": 2048, 00:21:04.242 "sequence_count": 2048, 00:21:04.242 "buf_count": 2048 00:21:04.242 } 00:21:04.242 } 00:21:04.242 ] 00:21:04.242 }, 00:21:04.242 { 00:21:04.242 "subsystem": "bdev", 00:21:04.242 "config": [ 00:21:04.242 { 00:21:04.242 "method": "bdev_set_options", 00:21:04.242 "params": { 00:21:04.242 "bdev_io_pool_size": 65535, 00:21:04.242 "bdev_io_cache_size": 256, 00:21:04.242 "bdev_auto_examine": true, 00:21:04.242 "iobuf_small_cache_size": 128, 00:21:04.242 "iobuf_large_cache_size": 16 00:21:04.242 } 00:21:04.242 }, 00:21:04.242 { 00:21:04.242 "method": "bdev_raid_set_options", 00:21:04.242 "params": { 00:21:04.242 "process_window_size_kb": 1024 00:21:04.242 } 00:21:04.242 }, 00:21:04.242 { 00:21:04.242 "method": "bdev_iscsi_set_options", 00:21:04.242 "params": { 00:21:04.242 "timeout_sec": 30 00:21:04.242 } 00:21:04.242 }, 00:21:04.242 { 00:21:04.242 "method": "bdev_nvme_set_options", 00:21:04.242 "params": { 00:21:04.242 "action_on_timeout": "none", 00:21:04.242 "timeout_us": 0, 00:21:04.242 "timeout_admin_us": 0, 00:21:04.242 "keep_alive_timeout_ms": 10000, 00:21:04.242 "arbitration_burst": 0, 
00:21:04.242 "low_priority_weight": 0, 00:21:04.242 "medium_priority_weight": 0, 00:21:04.242 "high_priority_weight": 0, 00:21:04.242 "nvme_adminq_poll_period_us": 10000, 00:21:04.242 "nvme_ioq_poll_period_us": 0, 00:21:04.242 "io_queue_requests": 512, 00:21:04.242 "delay_cmd_submit": true, 00:21:04.242 "transport_retry_count": 4, 00:21:04.242 "bdev_retry_count": 3, 00:21:04.242 "transport_ack_timeout": 0, 00:21:04.242 "ctrlr_loss_timeout_sec": 0, 00:21:04.242 "reconnect_delay_sec": 0, 00:21:04.242 "fast_io_fail_timeout_sec": 0, 00:21:04.242 "disable_auto_failback": false, 00:21:04.242 "generate_uuids": false, 00:21:04.242 "transport_tos": 0, 00:21:04.242 "nvme_error_stat": false, 00:21:04.242 "rdma_srq_size": 0, 00:21:04.242 "io_path_stat": false, 00:21:04.242 "allow_accel_sequence": false, 00:21:04.242 "rdma_max_cq_size": 0, 00:21:04.242 "rdma_cm_event_timeout_ms": 0, 00:21:04.242 "dhchap_digests": [ 00:21:04.242 "sha256", 00:21:04.242 "sha384", 00:21:04.242 "sha512" 00:21:04.242 ], 00:21:04.242 "dhchap_dhgroups": [ 00:21:04.242 "null", 00:21:04.242 "ffdhe2048", 00:21:04.242 "ffdhe3072", 00:21:04.242 "ffdhe4096", 00:21:04.242 "ffdhe6144", 00:21:04.242 "ffdhe8192" 00:21:04.242 ] 00:21:04.242 } 00:21:04.242 }, 00:21:04.242 { 00:21:04.242 "method": "bdev_nvme_attach_controller", 00:21:04.242 "params": { 00:21:04.242 "name": "TLSTEST", 00:21:04.242 "trtype": "TCP", 00:21:04.242 "adrfam": "IPv4", 00:21:04.242 "traddr": "10.0.0.2", 00:21:04.242 "trsvcid": "4420", 00:21:04.242 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.242 "prchk_reftag": false, 00:21:04.242 "prchk_guard": false, 00:21:04.242 "ctrlr_loss_timeout_sec": 0, 00:21:04.242 "reconnect_delay_sec": 0, 00:21:04.242 "fast_io_fail_timeout_sec": 0, 00:21:04.242 "psk": "/tmp/tmp.lRIeTRrdM7", 00:21:04.242 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:04.242 "hdgst": false, 00:21:04.242 "ddgst": false 00:21:04.242 } 00:21:04.242 }, 00:21:04.242 { 00:21:04.242 "method": "bdev_nvme_set_hotplug", 00:21:04.242 "params": { 00:21:04.242 "period_us": 100000, 00:21:04.242 "enable": false 00:21:04.242 } 00:21:04.242 }, 00:21:04.242 { 00:21:04.242 "method": "bdev_wait_for_examine" 00:21:04.242 } 00:21:04.242 ] 00:21:04.242 }, 00:21:04.242 { 00:21:04.242 "subsystem": "nbd", 00:21:04.242 "config": [] 00:21:04.242 } 00:21:04.242 ] 00:21:04.242 }' 00:21:04.242 09:35:35 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1182593 00:21:04.242 09:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1182593 ']' 00:21:04.242 09:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1182593 00:21:04.242 09:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:04.242 09:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:04.242 09:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1182593 00:21:04.242 09:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:04.242 09:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:04.242 09:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1182593' 00:21:04.242 killing process with pid 1182593 00:21:04.242 09:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1182593 00:21:04.242 Received shutdown signal, test time was about 10.000000 seconds 00:21:04.242 00:21:04.242 Latency(us) 00:21:04.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:21:04.242 =================================================================================================================== 00:21:04.242 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:04.242 [2024-06-11 09:35:35.989479] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:04.242 09:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1182593 00:21:04.504 09:35:36 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1182227 00:21:04.504 09:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1182227 ']' 00:21:04.504 09:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1182227 00:21:04.504 09:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:04.504 09:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:04.504 09:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1182227 00:21:04.504 09:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:04.504 09:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:04.504 09:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1182227' 00:21:04.504 killing process with pid 1182227 00:21:04.504 09:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1182227 00:21:04.504 [2024-06-11 09:35:36.156642] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:04.504 09:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1182227 00:21:04.504 09:35:36 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:04.504 09:35:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:04.504 09:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:04.504 09:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.504 09:35:36 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:04.504 "subsystems": [ 00:21:04.504 { 00:21:04.504 "subsystem": "keyring", 00:21:04.504 "config": [] 00:21:04.504 }, 00:21:04.504 { 00:21:04.504 "subsystem": "iobuf", 00:21:04.504 "config": [ 00:21:04.504 { 00:21:04.504 "method": "iobuf_set_options", 00:21:04.504 "params": { 00:21:04.504 "small_pool_count": 8192, 00:21:04.504 "large_pool_count": 1024, 00:21:04.504 "small_bufsize": 8192, 00:21:04.504 "large_bufsize": 135168 00:21:04.504 } 00:21:04.504 } 00:21:04.504 ] 00:21:04.504 }, 00:21:04.504 { 00:21:04.504 "subsystem": "sock", 00:21:04.504 "config": [ 00:21:04.504 { 00:21:04.504 "method": "sock_set_default_impl", 00:21:04.504 "params": { 00:21:04.504 "impl_name": "posix" 00:21:04.504 } 00:21:04.504 }, 00:21:04.504 { 00:21:04.504 "method": "sock_impl_set_options", 00:21:04.504 "params": { 00:21:04.504 "impl_name": "ssl", 00:21:04.504 "recv_buf_size": 4096, 00:21:04.504 "send_buf_size": 4096, 00:21:04.504 "enable_recv_pipe": true, 00:21:04.504 "enable_quickack": false, 00:21:04.504 "enable_placement_id": 0, 00:21:04.504 "enable_zerocopy_send_server": true, 00:21:04.504 "enable_zerocopy_send_client": false, 00:21:04.504 "zerocopy_threshold": 0, 00:21:04.504 "tls_version": 0, 00:21:04.504 "enable_ktls": false 00:21:04.504 } 00:21:04.504 }, 00:21:04.504 { 00:21:04.504 "method": "sock_impl_set_options", 
00:21:04.504 "params": { 00:21:04.504 "impl_name": "posix", 00:21:04.504 "recv_buf_size": 2097152, 00:21:04.504 "send_buf_size": 2097152, 00:21:04.504 "enable_recv_pipe": true, 00:21:04.504 "enable_quickack": false, 00:21:04.504 "enable_placement_id": 0, 00:21:04.504 "enable_zerocopy_send_server": true, 00:21:04.504 "enable_zerocopy_send_client": false, 00:21:04.504 "zerocopy_threshold": 0, 00:21:04.504 "tls_version": 0, 00:21:04.504 "enable_ktls": false 00:21:04.504 } 00:21:04.504 } 00:21:04.504 ] 00:21:04.504 }, 00:21:04.504 { 00:21:04.504 "subsystem": "vmd", 00:21:04.504 "config": [] 00:21:04.504 }, 00:21:04.504 { 00:21:04.504 "subsystem": "accel", 00:21:04.504 "config": [ 00:21:04.504 { 00:21:04.504 "method": "accel_set_options", 00:21:04.504 "params": { 00:21:04.504 "small_cache_size": 128, 00:21:04.504 "large_cache_size": 16, 00:21:04.504 "task_count": 2048, 00:21:04.504 "sequence_count": 2048, 00:21:04.504 "buf_count": 2048 00:21:04.504 } 00:21:04.504 } 00:21:04.504 ] 00:21:04.504 }, 00:21:04.504 { 00:21:04.504 "subsystem": "bdev", 00:21:04.504 "config": [ 00:21:04.504 { 00:21:04.504 "method": "bdev_set_options", 00:21:04.504 "params": { 00:21:04.504 "bdev_io_pool_size": 65535, 00:21:04.504 "bdev_io_cache_size": 256, 00:21:04.504 "bdev_auto_examine": true, 00:21:04.504 "iobuf_small_cache_size": 128, 00:21:04.504 "iobuf_large_cache_size": 16 00:21:04.504 } 00:21:04.504 }, 00:21:04.504 { 00:21:04.504 "method": "bdev_raid_set_options", 00:21:04.504 "params": { 00:21:04.504 "process_window_size_kb": 1024 00:21:04.504 } 00:21:04.504 }, 00:21:04.504 { 00:21:04.504 "method": "bdev_iscsi_set_options", 00:21:04.504 "params": { 00:21:04.504 "timeout_sec": 30 00:21:04.504 } 00:21:04.504 }, 00:21:04.504 { 00:21:04.504 "method": "bdev_nvme_set_options", 00:21:04.504 "params": { 00:21:04.504 "action_on_timeout": "none", 00:21:04.504 "timeout_us": 0, 00:21:04.504 "timeout_admin_us": 0, 00:21:04.504 "keep_alive_timeout_ms": 10000, 00:21:04.504 "arbitration_burst": 0, 00:21:04.504 "low_priority_weight": 0, 00:21:04.504 "medium_priority_weight": 0, 00:21:04.504 "high_priority_weight": 0, 00:21:04.504 "nvme_adminq_poll_period_us": 10000, 00:21:04.504 "nvme_ioq_poll_period_us": 0, 00:21:04.504 "io_queue_requests": 0, 00:21:04.504 "delay_cmd_submit": true, 00:21:04.504 "transport_retry_count": 4, 00:21:04.504 "bdev_retry_count": 3, 00:21:04.504 "transport_ack_timeout": 0, 00:21:04.504 "ctrlr_loss_timeout_sec": 0, 00:21:04.504 "reconnect_delay_sec": 0, 00:21:04.504 "fast_io_fail_timeout_sec": 0, 00:21:04.504 "disable_auto_failback": false, 00:21:04.504 "generate_uuids": false, 00:21:04.504 "transport_tos": 0, 00:21:04.504 "nvme_error_stat": false, 00:21:04.504 "rdma_srq_size": 0, 00:21:04.504 "io_path_stat": false, 00:21:04.504 "allow_accel_sequence": false, 00:21:04.504 "rdma_max_cq_size": 0, 00:21:04.504 "rdma_cm_event_timeout_ms": 0, 00:21:04.504 "dhchap_digests": [ 00:21:04.504 "sha256", 00:21:04.504 "sha384", 00:21:04.504 "sha512" 00:21:04.504 ], 00:21:04.504 "dhchap_dhgroups": [ 00:21:04.504 "null", 00:21:04.504 "ffdhe2048", 00:21:04.504 "ffdhe3072", 00:21:04.504 "ffdhe4096", 00:21:04.504 "ffdhe6144", 00:21:04.504 "ffdhe8192" 00:21:04.504 ] 00:21:04.504 } 00:21:04.504 }, 00:21:04.504 { 00:21:04.504 "method": "bdev_nvme_set_hotplug", 00:21:04.504 "params": { 00:21:04.504 "period_us": 100000, 00:21:04.504 "enable": false 00:21:04.504 } 00:21:04.504 }, 00:21:04.504 { 00:21:04.504 "method": "bdev_malloc_create", 00:21:04.504 "params": { 00:21:04.504 "name": "malloc0", 00:21:04.504 "num_blocks": 8192, 
00:21:04.504 "block_size": 4096, 00:21:04.504 "physical_block_size": 4096, 00:21:04.504 "uuid": "39759c0a-eb26-4a85-bff0-b685b57c7aff", 00:21:04.504 "optimal_io_boundary": 0 00:21:04.504 } 00:21:04.504 }, 00:21:04.504 { 00:21:04.504 "method": "bdev_wait_for_examine" 00:21:04.504 } 00:21:04.505 ] 00:21:04.505 }, 00:21:04.505 { 00:21:04.505 "subsystem": "nbd", 00:21:04.505 "config": [] 00:21:04.505 }, 00:21:04.505 { 00:21:04.505 "subsystem": "scheduler", 00:21:04.505 "config": [ 00:21:04.505 { 00:21:04.505 "method": "framework_set_scheduler", 00:21:04.505 "params": { 00:21:04.505 "name": "static" 00:21:04.505 } 00:21:04.505 } 00:21:04.505 ] 00:21:04.505 }, 00:21:04.505 { 00:21:04.505 "subsystem": "nvmf", 00:21:04.505 "config": [ 00:21:04.505 { 00:21:04.505 "method": "nvmf_set_config", 00:21:04.505 "params": { 00:21:04.505 "discovery_filter": "match_any", 00:21:04.505 "admin_cmd_passthru": { 00:21:04.505 "identify_ctrlr": false 00:21:04.505 } 00:21:04.505 } 00:21:04.505 }, 00:21:04.505 { 00:21:04.505 "method": "nvmf_set_max_subsystems", 00:21:04.505 "params": { 00:21:04.505 "max_subsystems": 1024 00:21:04.505 } 00:21:04.505 }, 00:21:04.505 { 00:21:04.505 "method": "nvmf_set_crdt", 00:21:04.505 "params": { 00:21:04.505 "crdt1": 0, 00:21:04.505 "crdt2": 0, 00:21:04.505 "crdt3": 0 00:21:04.505 } 00:21:04.505 }, 00:21:04.505 { 00:21:04.505 "method": "nvmf_create_transport", 00:21:04.505 "params": { 00:21:04.505 "trtype": "TCP", 00:21:04.505 "max_queue_depth": 128, 00:21:04.505 "max_io_qpairs_per_ctrlr": 127, 00:21:04.505 "in_capsule_data_size": 4096, 00:21:04.505 "max_io_size": 131072, 00:21:04.505 "io_unit_size": 131072, 00:21:04.505 "max_aq_depth": 128, 00:21:04.505 "num_shared_buffers": 511, 00:21:04.505 "buf_cache_size": 4294967295, 00:21:04.505 "dif_insert_or_strip": false, 00:21:04.505 "zcopy": false, 00:21:04.505 "c2h_success": false, 00:21:04.505 "sock_priority": 0, 00:21:04.505 "abort_timeout_sec": 1, 00:21:04.505 "ack_timeout": 0, 00:21:04.505 "data_wr_pool_size": 0 00:21:04.505 } 00:21:04.505 }, 00:21:04.505 { 00:21:04.505 "method": "nvmf_create_subsystem", 00:21:04.505 "params": { 00:21:04.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.505 "allow_any_host": false, 00:21:04.505 "serial_number": "SPDK00000000000001", 00:21:04.505 "model_number": "SPDK bdev Controller", 00:21:04.505 "max_namespaces": 10, 00:21:04.505 "min_cntlid": 1, 00:21:04.505 "max_cntlid": 65519, 00:21:04.505 "ana_reporting": false 00:21:04.505 } 00:21:04.505 }, 00:21:04.505 { 00:21:04.505 "method": "nvmf_subsystem_add_host", 00:21:04.505 "params": { 00:21:04.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.505 "host": "nqn.2016-06.io.spdk:host1", 00:21:04.505 "psk": "/tmp/tmp.lRIeTRrdM7" 00:21:04.505 } 00:21:04.505 }, 00:21:04.505 { 00:21:04.505 "method": "nvmf_subsystem_add_ns", 00:21:04.505 "params": { 00:21:04.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.505 "namespace": { 00:21:04.505 "nsid": 1, 00:21:04.505 "bdev_name": "malloc0", 00:21:04.505 "nguid": "39759C0AEB264A85BFF0B685B57C7AFF", 00:21:04.505 "uuid": "39759c0a-eb26-4a85-bff0-b685b57c7aff", 00:21:04.505 "no_auto_visible": false 00:21:04.505 } 00:21:04.505 } 00:21:04.505 }, 00:21:04.505 { 00:21:04.505 "method": "nvmf_subsystem_add_listener", 00:21:04.505 "params": { 00:21:04.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.505 "listen_address": { 00:21:04.505 "trtype": "TCP", 00:21:04.505 "adrfam": "IPv4", 00:21:04.505 "traddr": "10.0.0.2", 00:21:04.505 "trsvcid": "4420" 00:21:04.505 }, 00:21:04.505 "secure_channel": true 00:21:04.505 } 
00:21:04.505 } 00:21:04.505 ] 00:21:04.505 } 00:21:04.505 ] 00:21:04.505 }' 00:21:04.505 09:35:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1182949 00:21:04.505 09:35:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1182949 00:21:04.505 09:35:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:04.505 09:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1182949 ']' 00:21:04.505 09:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.505 09:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:04.505 09:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.505 09:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:04.505 09:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.765 [2024-06-11 09:35:36.355827] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:21:04.765 [2024-06-11 09:35:36.355895] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.765 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.766 [2024-06-11 09:35:36.423898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.766 [2024-06-11 09:35:36.490033] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.766 [2024-06-11 09:35:36.490067] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.766 [2024-06-11 09:35:36.490074] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.766 [2024-06-11 09:35:36.490085] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.766 [2024-06-11 09:35:36.490090] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
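From here the target is rebuilt from the save_config snapshot rather than by replaying individual RPCs: the JSON echoed above is handed to nvmf_tgt on /dev/fd/62, and the notices that follow (TCP transport init, the PSK-path deprecation warning, the TLS listener on 10.0.0.2 port 4420) are all produced from that config. A round-trip sketch under the same assumptions, with $SPDK standing for the tree prefix and /tmp/tgt.json as a hypothetical stand-in for the pipe (the real invocation also wraps nvmf_tgt in ip netns exec and passes -i 0 -e 0xFFFF, omitted here):

    # Capture the live target's config, then start a fresh target from it.
    $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock save_config > /tmp/tgt.json
    $SPDK/build/bin/nvmf_tgt -m 0x2 -c /tmp/tgt.json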
00:21:04.766 [2024-06-11 09:35:36.490141] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.026 [2024-06-11 09:35:36.679104] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.026 [2024-06-11 09:35:36.695046] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:05.026 [2024-06-11 09:35:36.711100] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:05.027 [2024-06-11 09:35:36.724624] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.598 09:35:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:05.598 09:35:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:05.598 09:35:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:05.598 09:35:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:05.598 09:35:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.598 09:35:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.598 09:35:37 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1183039 00:21:05.598 09:35:37 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1183039 /var/tmp/bdevperf.sock 00:21:05.598 09:35:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1183039 ']' 00:21:05.598 09:35:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.598 09:35:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:05.598 09:35:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:05.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
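The bdevperf process that follows receives its JSON config on /dev/fd/63; the bdevperf invocation and the echo of the config both trace to the same script line (target/tls.sh@204), so the descriptor is presumably created with bash process substitution. A minimal sketch of that pattern, with $SPDK standing for the tree prefix and $bdevperfconf for the JSON echoed below:

    # -z makes bdevperf idle until configured over RPC; -c <(...) feeds the
    # JSON config through a pipe instead of a file on disk.
    $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")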
00:21:05.598 09:35:37 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:05.598 09:35:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:05.598 09:35:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.598 09:35:37 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:05.598 "subsystems": [ 00:21:05.598 { 00:21:05.598 "subsystem": "keyring", 00:21:05.598 "config": [] 00:21:05.598 }, 00:21:05.598 { 00:21:05.598 "subsystem": "iobuf", 00:21:05.598 "config": [ 00:21:05.598 { 00:21:05.598 "method": "iobuf_set_options", 00:21:05.598 "params": { 00:21:05.598 "small_pool_count": 8192, 00:21:05.598 "large_pool_count": 1024, 00:21:05.598 "small_bufsize": 8192, 00:21:05.598 "large_bufsize": 135168 00:21:05.598 } 00:21:05.598 } 00:21:05.598 ] 00:21:05.598 }, 00:21:05.598 { 00:21:05.598 "subsystem": "sock", 00:21:05.598 "config": [ 00:21:05.598 { 00:21:05.598 "method": "sock_set_default_impl", 00:21:05.598 "params": { 00:21:05.598 "impl_name": "posix" 00:21:05.598 } 00:21:05.598 }, 00:21:05.598 { 00:21:05.598 "method": "sock_impl_set_options", 00:21:05.598 "params": { 00:21:05.598 "impl_name": "ssl", 00:21:05.598 "recv_buf_size": 4096, 00:21:05.598 "send_buf_size": 4096, 00:21:05.598 "enable_recv_pipe": true, 00:21:05.598 "enable_quickack": false, 00:21:05.598 "enable_placement_id": 0, 00:21:05.598 "enable_zerocopy_send_server": true, 00:21:05.598 "enable_zerocopy_send_client": false, 00:21:05.598 "zerocopy_threshold": 0, 00:21:05.598 "tls_version": 0, 00:21:05.598 "enable_ktls": false 00:21:05.598 } 00:21:05.598 }, 00:21:05.598 { 00:21:05.598 "method": "sock_impl_set_options", 00:21:05.598 "params": { 00:21:05.598 "impl_name": "posix", 00:21:05.598 "recv_buf_size": 2097152, 00:21:05.598 "send_buf_size": 2097152, 00:21:05.598 "enable_recv_pipe": true, 00:21:05.598 "enable_quickack": false, 00:21:05.598 "enable_placement_id": 0, 00:21:05.598 "enable_zerocopy_send_server": true, 00:21:05.598 "enable_zerocopy_send_client": false, 00:21:05.598 "zerocopy_threshold": 0, 00:21:05.598 "tls_version": 0, 00:21:05.598 "enable_ktls": false 00:21:05.598 } 00:21:05.598 } 00:21:05.598 ] 00:21:05.598 }, 00:21:05.598 { 00:21:05.598 "subsystem": "vmd", 00:21:05.598 "config": [] 00:21:05.598 }, 00:21:05.598 { 00:21:05.598 "subsystem": "accel", 00:21:05.598 "config": [ 00:21:05.598 { 00:21:05.598 "method": "accel_set_options", 00:21:05.598 "params": { 00:21:05.598 "small_cache_size": 128, 00:21:05.598 "large_cache_size": 16, 00:21:05.598 "task_count": 2048, 00:21:05.598 "sequence_count": 2048, 00:21:05.598 "buf_count": 2048 00:21:05.598 } 00:21:05.598 } 00:21:05.598 ] 00:21:05.598 }, 00:21:05.598 { 00:21:05.598 "subsystem": "bdev", 00:21:05.598 "config": [ 00:21:05.598 { 00:21:05.598 "method": "bdev_set_options", 00:21:05.598 "params": { 00:21:05.598 "bdev_io_pool_size": 65535, 00:21:05.598 "bdev_io_cache_size": 256, 00:21:05.598 "bdev_auto_examine": true, 00:21:05.598 "iobuf_small_cache_size": 128, 00:21:05.598 "iobuf_large_cache_size": 16 00:21:05.598 } 00:21:05.598 }, 00:21:05.598 { 00:21:05.598 "method": "bdev_raid_set_options", 00:21:05.598 "params": { 00:21:05.598 "process_window_size_kb": 1024 00:21:05.598 } 00:21:05.598 }, 00:21:05.598 { 00:21:05.598 "method": "bdev_iscsi_set_options", 00:21:05.598 "params": { 00:21:05.598 "timeout_sec": 30 00:21:05.598 } 00:21:05.598 }, 00:21:05.598 { 00:21:05.598 "method": 
"bdev_nvme_set_options", 00:21:05.598 "params": { 00:21:05.598 "action_on_timeout": "none", 00:21:05.598 "timeout_us": 0, 00:21:05.598 "timeout_admin_us": 0, 00:21:05.598 "keep_alive_timeout_ms": 10000, 00:21:05.598 "arbitration_burst": 0, 00:21:05.598 "low_priority_weight": 0, 00:21:05.598 "medium_priority_weight": 0, 00:21:05.598 "high_priority_weight": 0, 00:21:05.598 "nvme_adminq_poll_period_us": 10000, 00:21:05.598 "nvme_ioq_poll_period_us": 0, 00:21:05.598 "io_queue_requests": 512, 00:21:05.598 "delay_cmd_submit": true, 00:21:05.598 "transport_retry_count": 4, 00:21:05.598 "bdev_retry_count": 3, 00:21:05.598 "transport_ack_timeout": 0, 00:21:05.598 "ctrlr_loss_timeout_sec": 0, 00:21:05.598 "reconnect_delay_sec": 0, 00:21:05.598 "fast_io_fail_timeout_sec": 0, 00:21:05.598 "disable_auto_failback": false, 00:21:05.598 "generate_uuids": false, 00:21:05.598 "transport_tos": 0, 00:21:05.598 "nvme_error_stat": false, 00:21:05.598 "rdma_srq_size": 0, 00:21:05.598 "io_path_stat": false, 00:21:05.598 "allow_accel_sequence": false, 00:21:05.598 "rdma_max_cq_size": 0, 00:21:05.598 "rdma_cm_event_timeout_ms": 0, 00:21:05.598 "dhchap_digests": [ 00:21:05.598 "sha256", 00:21:05.598 "sha384", 00:21:05.598 "sha512" 00:21:05.598 ], 00:21:05.598 "dhchap_dhgroups": [ 00:21:05.598 "null", 00:21:05.598 "ffdhe2048", 00:21:05.598 "ffdhe3072", 00:21:05.598 "ffdhe4096", 00:21:05.599 "ffdhe6144", 00:21:05.599 "ffdhe8192" 00:21:05.599 ] 00:21:05.599 } 00:21:05.599 }, 00:21:05.599 { 00:21:05.599 "method": "bdev_nvme_attach_controller", 00:21:05.599 "params": { 00:21:05.599 "name": "TLSTEST", 00:21:05.599 "trtype": "TCP", 00:21:05.599 "adrfam": "IPv4", 00:21:05.599 "traddr": "10.0.0.2", 00:21:05.599 "trsvcid": "4420", 00:21:05.599 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.599 "prchk_reftag": false, 00:21:05.599 "prchk_guard": false, 00:21:05.599 "ctrlr_loss_timeout_sec": 0, 00:21:05.599 "reconnect_delay_sec": 0, 00:21:05.599 "fast_io_fail_timeout_sec": 0, 00:21:05.599 "psk": "/tmp/tmp.lRIeTRrdM7", 00:21:05.599 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.599 "hdgst": false, 00:21:05.599 "ddgst": false 00:21:05.599 } 00:21:05.599 }, 00:21:05.599 { 00:21:05.599 "method": "bdev_nvme_set_hotplug", 00:21:05.599 "params": { 00:21:05.599 "period_us": 100000, 00:21:05.599 "enable": false 00:21:05.599 } 00:21:05.599 }, 00:21:05.599 { 00:21:05.599 "method": "bdev_wait_for_examine" 00:21:05.599 } 00:21:05.599 ] 00:21:05.599 }, 00:21:05.599 { 00:21:05.599 "subsystem": "nbd", 00:21:05.599 "config": [] 00:21:05.599 } 00:21:05.599 ] 00:21:05.599 }' 00:21:05.599 [2024-06-11 09:35:37.299430] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:21:05.599 [2024-06-11 09:35:37.299479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1183039 ] 00:21:05.599 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.599 [2024-06-11 09:35:37.348045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.599 [2024-06-11 09:35:37.400392] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.859 [2024-06-11 09:35:37.525265] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.859 [2024-06-11 09:35:37.525332] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:06.429 09:35:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:06.429 09:35:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:06.429 09:35:38 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:06.430 Running I/O for 10 seconds... 00:21:18.698 00:21:18.698 Latency(us) 00:21:18.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.698 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:18.698 Verification LBA range: start 0x0 length 0x2000 00:21:18.698 TLSTESTn1 : 10.07 1586.76 6.20 0.00 0.00 80493.21 5461.33 112721.92 00:21:18.698 =================================================================================================================== 00:21:18.698 Total : 1586.76 6.20 0.00 0.00 80493.21 5461.33 112721.92 00:21:18.698 0 00:21:18.698 09:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:18.698 09:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1183039 00:21:18.698 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1183039 ']' 00:21:18.698 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1183039 00:21:18.698 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:18.698 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:18.698 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1183039 00:21:18.698 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:18.698 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:18.698 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1183039' 00:21:18.698 killing process with pid 1183039 00:21:18.698 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1183039 00:21:18.698 Received shutdown signal, test time was about 10.000000 seconds 00:21:18.698 00:21:18.698 Latency(us) 00:21:18.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.698 =================================================================================================================== 00:21:18.698 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:18.698 [2024-06-11 09:35:48.415872] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1183039 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1182949 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1182949 ']' 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1182949 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1182949 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1182949' 00:21:18.699 killing process with pid 1182949 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1182949 00:21:18.699 [2024-06-11 09:35:48.581284] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1182949 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1185319 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1185319 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1185319 ']' 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:18.699 09:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.699 [2024-06-11 09:35:48.777521] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:21:18.699 [2024-06-11 09:35:48.777573] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.699 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.699 [2024-06-11 09:35:48.860347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.699 [2024-06-11 09:35:48.930710] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:18.699 [2024-06-11 09:35:48.930762] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.699 [2024-06-11 09:35:48.930769] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.699 [2024-06-11 09:35:48.930777] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.699 [2024-06-11 09:35:48.930787] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:18.699 [2024-06-11 09:35:48.930812] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.699 09:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:18.699 09:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:18.699 09:35:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:18.699 09:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:18.699 09:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.699 09:35:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.699 09:35:49 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.lRIeTRrdM7 00:21:18.699 09:35:49 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lRIeTRrdM7 00:21:18.699 09:35:49 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:18.699 [2024-06-11 09:35:49.891110] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.699 09:35:49 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:18.699 09:35:50 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:18.699 [2024-06-11 09:35:50.344261] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.699 [2024-06-11 09:35:50.344576] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.699 09:35:50 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:18.959 malloc0 00:21:18.959 09:35:50 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:19.220 09:35:50 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lRIeTRrdM7 00:21:19.220 [2024-06-11 09:35:51.000325] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:19.220 09:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1185691 00:21:19.220 09:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:19.220 09:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 
-o 4k -w verify -t 1 00:21:19.220 09:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1185691 /var/tmp/bdevperf.sock 00:21:19.220 09:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1185691 ']' 00:21:19.220 09:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.220 09:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:19.220 09:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:19.220 09:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:19.220 09:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.481 [2024-06-11 09:35:51.074625] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:21:19.481 [2024-06-11 09:35:51.074695] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185691 ] 00:21:19.481 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.481 [2024-06-11 09:35:51.141102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.481 [2024-06-11 09:35:51.214381] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.481 09:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:19.481 09:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:19.481 09:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lRIeTRrdM7 00:21:19.741 09:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:20.000 [2024-06-11 09:35:51.676789] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:20.000 nvme0n1 00:21:20.000 09:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:20.260 Running I/O for 1 seconds... 
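Stripped of the xtrace noise, the host-side attach that just ran is two RPCs against bdevperf's RPC socket: register the PSK file as a named keyring key, then reference that key when attaching the TLS-secured controller. The arguments below are copied from this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Register the PSK file under the keyring name "key0".
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lRIeTRrdM7
    # Attach the controller over TCP, referencing the key by name.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

This is the replacement for the earlier bdevperf run, which passed the PSK as a raw path ("psk": "/tmp/tmp.lRIeTRrdM7") and therefore tripped the spdk_nvme_ctrlr_opts.psk deprecation warning.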
00:21:21.200 00:21:21.200 Latency(us) 00:21:21.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.200 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:21.200 Verification LBA range: start 0x0 length 0x2000 00:21:21.200 nvme0n1 : 1.04 1293.40 5.05 0.00 0.00 97788.57 6253.23 156412.59 00:21:21.200 =================================================================================================================== 00:21:21.200 Total : 1293.40 5.05 0.00 0.00 97788.57 6253.23 156412.59 00:21:21.200 0 00:21:21.200 09:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1185691 00:21:21.200 09:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1185691 ']' 00:21:21.200 09:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1185691 00:21:21.200 09:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:21.200 09:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:21.200 09:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1185691 00:21:21.200 09:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:21.200 09:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:21.200 09:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1185691' 00:21:21.200 killing process with pid 1185691 00:21:21.200 09:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1185691 00:21:21.200 Received shutdown signal, test time was about 1.000000 seconds 00:21:21.200 00:21:21.200 Latency(us) 00:21:21.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.200 =================================================================================================================== 00:21:21.200 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:21.200 09:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1185691 00:21:21.460 09:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1185319 00:21:21.460 09:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1185319 ']' 00:21:21.460 09:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1185319 00:21:21.460 09:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:21.460 09:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:21.460 09:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1185319 00:21:21.460 09:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:21.460 09:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:21.460 09:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1185319' 00:21:21.460 killing process with pid 1185319 00:21:21.460 09:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1185319 00:21:21.460 [2024-06-11 09:35:53.189555] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:21.460 09:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1185319 00:21:21.721 09:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:21:21.721 09:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:21.721 
09:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:21.721 09:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.721 09:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1186294 00:21:21.721 09:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1186294 00:21:21.721 09:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:21.721 09:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1186294 ']' 00:21:21.721 09:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.721 09:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:21.721 09:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.721 09:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:21.721 09:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.721 [2024-06-11 09:35:53.404657] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:21:21.721 [2024-06-11 09:35:53.404724] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.721 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.721 [2024-06-11 09:35:53.490238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.982 [2024-06-11 09:35:53.584589] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.982 [2024-06-11 09:35:53.584645] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.982 [2024-06-11 09:35:53.584655] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.982 [2024-06-11 09:35:53.584661] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.982 [2024-06-11 09:35:53.584667] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
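The target bring-up that follows repeats the plumbing performed earlier by setup_nvmf_tgt, this time through rpc_cmd. In plain rpc.py terms the sequence is roughly the following sketch; the arguments are copied from the earlier invocation, and the default /var/tmp/spdk.sock RPC socket is assumed.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-enabled (experimental in this SPDK build).
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    # 32 MiB malloc bdev: 8192 blocks of 4096 B, matching the saved config.
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Tie the host NQN to the PSK file (the deprecated "PSK path" flavor).
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lRIeTRrdM7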
00:21:21.982 [2024-06-11 09:35:53.584693] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.554 09:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:22.554 09:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:22.554 09:35:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:22.554 09:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:22.554 09:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.554 09:35:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.554 09:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:22.554 09:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:22.554 09:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.554 [2024-06-11 09:35:54.328801] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.554 malloc0 00:21:22.554 [2024-06-11 09:35:54.358946] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.554 [2024-06-11 09:35:54.359249] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.816 09:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:22.816 09:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1186386 00:21:22.816 09:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1186386 /var/tmp/bdevperf.sock 00:21:22.816 09:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:22.816 09:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1186386 ']' 00:21:22.816 09:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.816 09:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:22.816 09:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.816 09:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:22.816 09:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.816 [2024-06-11 09:35:54.439448] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:21:22.816 [2024-06-11 09:35:54.439507] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186386 ] 00:21:22.816 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.816 [2024-06-11 09:35:54.505093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.816 [2024-06-11 09:35:54.579129] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.076 09:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:23.076 09:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:23.076 09:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lRIeTRrdM7 00:21:23.076 09:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:23.337 [2024-06-11 09:35:55.045494] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:23.337 nvme0n1 00:21:23.337 09:35:55 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:23.609 Running I/O for 1 seconds... 00:21:24.553 00:21:24.553 Latency(us) 00:21:24.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.553 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:24.553 Verification LBA range: start 0x0 length 0x2000 00:21:24.553 nvme0n1 : 1.02 4691.01 18.32 0.00 0.00 27023.41 6853.97 48933.55 00:21:24.553 =================================================================================================================== 00:21:24.553 Total : 4691.01 18.32 0.00 0.00 27023.41 6853.97 48933.55 00:21:24.553 0 00:21:24.553 09:35:56 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:24.553 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.553 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.814 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.814 09:35:56 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:24.814 "subsystems": [ 00:21:24.814 { 00:21:24.814 "subsystem": "keyring", 00:21:24.814 "config": [ 00:21:24.814 { 00:21:24.814 "method": "keyring_file_add_key", 00:21:24.814 "params": { 00:21:24.814 "name": "key0", 00:21:24.814 "path": "/tmp/tmp.lRIeTRrdM7" 00:21:24.814 } 00:21:24.814 } 00:21:24.814 ] 00:21:24.814 }, 00:21:24.814 { 00:21:24.814 "subsystem": "iobuf", 00:21:24.814 "config": [ 00:21:24.814 { 00:21:24.814 "method": "iobuf_set_options", 00:21:24.814 "params": { 00:21:24.814 "small_pool_count": 8192, 00:21:24.814 "large_pool_count": 1024, 00:21:24.814 "small_bufsize": 8192, 00:21:24.814 "large_bufsize": 135168 00:21:24.814 } 00:21:24.814 } 00:21:24.814 ] 00:21:24.814 }, 00:21:24.814 { 00:21:24.814 "subsystem": "sock", 00:21:24.814 "config": [ 00:21:24.814 { 00:21:24.814 "method": "sock_set_default_impl", 00:21:24.814 "params": { 00:21:24.814 "impl_name": "posix" 00:21:24.814 } 00:21:24.814 }, 00:21:24.814 
{ 00:21:24.814 "method": "sock_impl_set_options", 00:21:24.814 "params": { 00:21:24.814 "impl_name": "ssl", 00:21:24.814 "recv_buf_size": 4096, 00:21:24.814 "send_buf_size": 4096, 00:21:24.814 "enable_recv_pipe": true, 00:21:24.814 "enable_quickack": false, 00:21:24.814 "enable_placement_id": 0, 00:21:24.814 "enable_zerocopy_send_server": true, 00:21:24.814 "enable_zerocopy_send_client": false, 00:21:24.814 "zerocopy_threshold": 0, 00:21:24.814 "tls_version": 0, 00:21:24.814 "enable_ktls": false 00:21:24.814 } 00:21:24.814 }, 00:21:24.814 { 00:21:24.814 "method": "sock_impl_set_options", 00:21:24.814 "params": { 00:21:24.814 "impl_name": "posix", 00:21:24.814 "recv_buf_size": 2097152, 00:21:24.814 "send_buf_size": 2097152, 00:21:24.814 "enable_recv_pipe": true, 00:21:24.814 "enable_quickack": false, 00:21:24.814 "enable_placement_id": 0, 00:21:24.814 "enable_zerocopy_send_server": true, 00:21:24.814 "enable_zerocopy_send_client": false, 00:21:24.814 "zerocopy_threshold": 0, 00:21:24.814 "tls_version": 0, 00:21:24.814 "enable_ktls": false 00:21:24.814 } 00:21:24.814 } 00:21:24.814 ] 00:21:24.814 }, 00:21:24.814 { 00:21:24.814 "subsystem": "vmd", 00:21:24.814 "config": [] 00:21:24.814 }, 00:21:24.814 { 00:21:24.814 "subsystem": "accel", 00:21:24.814 "config": [ 00:21:24.814 { 00:21:24.814 "method": "accel_set_options", 00:21:24.814 "params": { 00:21:24.814 "small_cache_size": 128, 00:21:24.814 "large_cache_size": 16, 00:21:24.814 "task_count": 2048, 00:21:24.814 "sequence_count": 2048, 00:21:24.814 "buf_count": 2048 00:21:24.814 } 00:21:24.814 } 00:21:24.814 ] 00:21:24.814 }, 00:21:24.814 { 00:21:24.814 "subsystem": "bdev", 00:21:24.814 "config": [ 00:21:24.814 { 00:21:24.814 "method": "bdev_set_options", 00:21:24.814 "params": { 00:21:24.814 "bdev_io_pool_size": 65535, 00:21:24.814 "bdev_io_cache_size": 256, 00:21:24.814 "bdev_auto_examine": true, 00:21:24.814 "iobuf_small_cache_size": 128, 00:21:24.814 "iobuf_large_cache_size": 16 00:21:24.814 } 00:21:24.814 }, 00:21:24.814 { 00:21:24.814 "method": "bdev_raid_set_options", 00:21:24.814 "params": { 00:21:24.814 "process_window_size_kb": 1024 00:21:24.814 } 00:21:24.814 }, 00:21:24.814 { 00:21:24.814 "method": "bdev_iscsi_set_options", 00:21:24.814 "params": { 00:21:24.814 "timeout_sec": 30 00:21:24.814 } 00:21:24.814 }, 00:21:24.814 { 00:21:24.814 "method": "bdev_nvme_set_options", 00:21:24.814 "params": { 00:21:24.814 "action_on_timeout": "none", 00:21:24.814 "timeout_us": 0, 00:21:24.814 "timeout_admin_us": 0, 00:21:24.814 "keep_alive_timeout_ms": 10000, 00:21:24.814 "arbitration_burst": 0, 00:21:24.814 "low_priority_weight": 0, 00:21:24.814 "medium_priority_weight": 0, 00:21:24.814 "high_priority_weight": 0, 00:21:24.814 "nvme_adminq_poll_period_us": 10000, 00:21:24.814 "nvme_ioq_poll_period_us": 0, 00:21:24.814 "io_queue_requests": 0, 00:21:24.814 "delay_cmd_submit": true, 00:21:24.814 "transport_retry_count": 4, 00:21:24.814 "bdev_retry_count": 3, 00:21:24.814 "transport_ack_timeout": 0, 00:21:24.814 "ctrlr_loss_timeout_sec": 0, 00:21:24.814 "reconnect_delay_sec": 0, 00:21:24.814 "fast_io_fail_timeout_sec": 0, 00:21:24.814 "disable_auto_failback": false, 00:21:24.814 "generate_uuids": false, 00:21:24.814 "transport_tos": 0, 00:21:24.814 "nvme_error_stat": false, 00:21:24.814 "rdma_srq_size": 0, 00:21:24.814 "io_path_stat": false, 00:21:24.814 "allow_accel_sequence": false, 00:21:24.814 "rdma_max_cq_size": 0, 00:21:24.814 "rdma_cm_event_timeout_ms": 0, 00:21:24.814 "dhchap_digests": [ 00:21:24.814 "sha256", 00:21:24.814 "sha384", 
00:21:24.814 "sha512" 00:21:24.814 ], 00:21:24.814 "dhchap_dhgroups": [ 00:21:24.814 "null", 00:21:24.814 "ffdhe2048", 00:21:24.814 "ffdhe3072", 00:21:24.814 "ffdhe4096", 00:21:24.814 "ffdhe6144", 00:21:24.814 "ffdhe8192" 00:21:24.814 ] 00:21:24.814 } 00:21:24.814 }, 00:21:24.814 { 00:21:24.814 "method": "bdev_nvme_set_hotplug", 00:21:24.814 "params": { 00:21:24.814 "period_us": 100000, 00:21:24.814 "enable": false 00:21:24.814 } 00:21:24.814 }, 00:21:24.814 { 00:21:24.814 "method": "bdev_malloc_create", 00:21:24.814 "params": { 00:21:24.814 "name": "malloc0", 00:21:24.814 "num_blocks": 8192, 00:21:24.814 "block_size": 4096, 00:21:24.814 "physical_block_size": 4096, 00:21:24.814 "uuid": "0aba9423-017b-4531-89a8-6bfafce7467e", 00:21:24.814 "optimal_io_boundary": 0 00:21:24.814 } 00:21:24.814 }, 00:21:24.814 { 00:21:24.814 "method": "bdev_wait_for_examine" 00:21:24.814 } 00:21:24.814 ] 00:21:24.814 }, 00:21:24.814 { 00:21:24.814 "subsystem": "nbd", 00:21:24.814 "config": [] 00:21:24.814 }, 00:21:24.814 { 00:21:24.814 "subsystem": "scheduler", 00:21:24.814 "config": [ 00:21:24.814 { 00:21:24.814 "method": "framework_set_scheduler", 00:21:24.814 "params": { 00:21:24.814 "name": "static" 00:21:24.814 } 00:21:24.814 } 00:21:24.814 ] 00:21:24.814 }, 00:21:24.814 { 00:21:24.814 "subsystem": "nvmf", 00:21:24.814 "config": [ 00:21:24.814 { 00:21:24.814 "method": "nvmf_set_config", 00:21:24.814 "params": { 00:21:24.814 "discovery_filter": "match_any", 00:21:24.814 "admin_cmd_passthru": { 00:21:24.814 "identify_ctrlr": false 00:21:24.814 } 00:21:24.814 } 00:21:24.814 }, 00:21:24.814 { 00:21:24.814 "method": "nvmf_set_max_subsystems", 00:21:24.814 "params": { 00:21:24.814 "max_subsystems": 1024 00:21:24.814 } 00:21:24.814 }, 00:21:24.814 { 00:21:24.814 "method": "nvmf_set_crdt", 00:21:24.814 "params": { 00:21:24.815 "crdt1": 0, 00:21:24.815 "crdt2": 0, 00:21:24.815 "crdt3": 0 00:21:24.815 } 00:21:24.815 }, 00:21:24.815 { 00:21:24.815 "method": "nvmf_create_transport", 00:21:24.815 "params": { 00:21:24.815 "trtype": "TCP", 00:21:24.815 "max_queue_depth": 128, 00:21:24.815 "max_io_qpairs_per_ctrlr": 127, 00:21:24.815 "in_capsule_data_size": 4096, 00:21:24.815 "max_io_size": 131072, 00:21:24.815 "io_unit_size": 131072, 00:21:24.815 "max_aq_depth": 128, 00:21:24.815 "num_shared_buffers": 511, 00:21:24.815 "buf_cache_size": 4294967295, 00:21:24.815 "dif_insert_or_strip": false, 00:21:24.815 "zcopy": false, 00:21:24.815 "c2h_success": false, 00:21:24.815 "sock_priority": 0, 00:21:24.815 "abort_timeout_sec": 1, 00:21:24.815 "ack_timeout": 0, 00:21:24.815 "data_wr_pool_size": 0 00:21:24.815 } 00:21:24.815 }, 00:21:24.815 { 00:21:24.815 "method": "nvmf_create_subsystem", 00:21:24.815 "params": { 00:21:24.815 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.815 "allow_any_host": false, 00:21:24.815 "serial_number": "00000000000000000000", 00:21:24.815 "model_number": "SPDK bdev Controller", 00:21:24.815 "max_namespaces": 32, 00:21:24.815 "min_cntlid": 1, 00:21:24.815 "max_cntlid": 65519, 00:21:24.815 "ana_reporting": false 00:21:24.815 } 00:21:24.815 }, 00:21:24.815 { 00:21:24.815 "method": "nvmf_subsystem_add_host", 00:21:24.815 "params": { 00:21:24.815 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.815 "host": "nqn.2016-06.io.spdk:host1", 00:21:24.815 "psk": "key0" 00:21:24.815 } 00:21:24.815 }, 00:21:24.815 { 00:21:24.815 "method": "nvmf_subsystem_add_ns", 00:21:24.815 "params": { 00:21:24.815 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.815 "namespace": { 00:21:24.815 "nsid": 1, 00:21:24.815 "bdev_name": 
"malloc0", 00:21:24.815 "nguid": "0ABA9423017B453189A86BFAFCE7467E", 00:21:24.815 "uuid": "0aba9423-017b-4531-89a8-6bfafce7467e", 00:21:24.815 "no_auto_visible": false 00:21:24.815 } 00:21:24.815 } 00:21:24.815 }, 00:21:24.815 { 00:21:24.815 "method": "nvmf_subsystem_add_listener", 00:21:24.815 "params": { 00:21:24.815 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.815 "listen_address": { 00:21:24.815 "trtype": "TCP", 00:21:24.815 "adrfam": "IPv4", 00:21:24.815 "traddr": "10.0.0.2", 00:21:24.815 "trsvcid": "4420" 00:21:24.815 }, 00:21:24.815 "secure_channel": true 00:21:24.815 } 00:21:24.815 } 00:21:24.815 ] 00:21:24.815 } 00:21:24.815 ] 00:21:24.815 }' 00:21:24.815 09:35:56 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:25.076 09:35:56 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:25.076 "subsystems": [ 00:21:25.076 { 00:21:25.076 "subsystem": "keyring", 00:21:25.076 "config": [ 00:21:25.076 { 00:21:25.076 "method": "keyring_file_add_key", 00:21:25.076 "params": { 00:21:25.076 "name": "key0", 00:21:25.076 "path": "/tmp/tmp.lRIeTRrdM7" 00:21:25.076 } 00:21:25.076 } 00:21:25.076 ] 00:21:25.076 }, 00:21:25.076 { 00:21:25.076 "subsystem": "iobuf", 00:21:25.076 "config": [ 00:21:25.076 { 00:21:25.076 "method": "iobuf_set_options", 00:21:25.076 "params": { 00:21:25.076 "small_pool_count": 8192, 00:21:25.076 "large_pool_count": 1024, 00:21:25.076 "small_bufsize": 8192, 00:21:25.076 "large_bufsize": 135168 00:21:25.076 } 00:21:25.076 } 00:21:25.076 ] 00:21:25.076 }, 00:21:25.076 { 00:21:25.076 "subsystem": "sock", 00:21:25.076 "config": [ 00:21:25.076 { 00:21:25.076 "method": "sock_set_default_impl", 00:21:25.076 "params": { 00:21:25.076 "impl_name": "posix" 00:21:25.076 } 00:21:25.076 }, 00:21:25.076 { 00:21:25.076 "method": "sock_impl_set_options", 00:21:25.076 "params": { 00:21:25.076 "impl_name": "ssl", 00:21:25.076 "recv_buf_size": 4096, 00:21:25.077 "send_buf_size": 4096, 00:21:25.077 "enable_recv_pipe": true, 00:21:25.077 "enable_quickack": false, 00:21:25.077 "enable_placement_id": 0, 00:21:25.077 "enable_zerocopy_send_server": true, 00:21:25.077 "enable_zerocopy_send_client": false, 00:21:25.077 "zerocopy_threshold": 0, 00:21:25.077 "tls_version": 0, 00:21:25.077 "enable_ktls": false 00:21:25.077 } 00:21:25.077 }, 00:21:25.077 { 00:21:25.077 "method": "sock_impl_set_options", 00:21:25.077 "params": { 00:21:25.077 "impl_name": "posix", 00:21:25.077 "recv_buf_size": 2097152, 00:21:25.077 "send_buf_size": 2097152, 00:21:25.077 "enable_recv_pipe": true, 00:21:25.077 "enable_quickack": false, 00:21:25.077 "enable_placement_id": 0, 00:21:25.077 "enable_zerocopy_send_server": true, 00:21:25.077 "enable_zerocopy_send_client": false, 00:21:25.077 "zerocopy_threshold": 0, 00:21:25.077 "tls_version": 0, 00:21:25.077 "enable_ktls": false 00:21:25.077 } 00:21:25.077 } 00:21:25.077 ] 00:21:25.077 }, 00:21:25.077 { 00:21:25.077 "subsystem": "vmd", 00:21:25.077 "config": [] 00:21:25.077 }, 00:21:25.077 { 00:21:25.077 "subsystem": "accel", 00:21:25.077 "config": [ 00:21:25.077 { 00:21:25.077 "method": "accel_set_options", 00:21:25.077 "params": { 00:21:25.077 "small_cache_size": 128, 00:21:25.077 "large_cache_size": 16, 00:21:25.077 "task_count": 2048, 00:21:25.077 "sequence_count": 2048, 00:21:25.077 "buf_count": 2048 00:21:25.077 } 00:21:25.077 } 00:21:25.077 ] 00:21:25.077 }, 00:21:25.077 { 00:21:25.077 "subsystem": "bdev", 00:21:25.077 "config": [ 00:21:25.077 { 00:21:25.077 
"method": "bdev_set_options", 00:21:25.077 "params": { 00:21:25.077 "bdev_io_pool_size": 65535, 00:21:25.077 "bdev_io_cache_size": 256, 00:21:25.077 "bdev_auto_examine": true, 00:21:25.077 "iobuf_small_cache_size": 128, 00:21:25.077 "iobuf_large_cache_size": 16 00:21:25.077 } 00:21:25.077 }, 00:21:25.077 { 00:21:25.077 "method": "bdev_raid_set_options", 00:21:25.077 "params": { 00:21:25.077 "process_window_size_kb": 1024 00:21:25.077 } 00:21:25.077 }, 00:21:25.077 { 00:21:25.077 "method": "bdev_iscsi_set_options", 00:21:25.077 "params": { 00:21:25.077 "timeout_sec": 30 00:21:25.077 } 00:21:25.077 }, 00:21:25.077 { 00:21:25.077 "method": "bdev_nvme_set_options", 00:21:25.077 "params": { 00:21:25.077 "action_on_timeout": "none", 00:21:25.077 "timeout_us": 0, 00:21:25.077 "timeout_admin_us": 0, 00:21:25.077 "keep_alive_timeout_ms": 10000, 00:21:25.077 "arbitration_burst": 0, 00:21:25.077 "low_priority_weight": 0, 00:21:25.077 "medium_priority_weight": 0, 00:21:25.077 "high_priority_weight": 0, 00:21:25.077 "nvme_adminq_poll_period_us": 10000, 00:21:25.077 "nvme_ioq_poll_period_us": 0, 00:21:25.077 "io_queue_requests": 512, 00:21:25.077 "delay_cmd_submit": true, 00:21:25.077 "transport_retry_count": 4, 00:21:25.077 "bdev_retry_count": 3, 00:21:25.077 "transport_ack_timeout": 0, 00:21:25.077 "ctrlr_loss_timeout_sec": 0, 00:21:25.077 "reconnect_delay_sec": 0, 00:21:25.077 "fast_io_fail_timeout_sec": 0, 00:21:25.077 "disable_auto_failback": false, 00:21:25.077 "generate_uuids": false, 00:21:25.077 "transport_tos": 0, 00:21:25.077 "nvme_error_stat": false, 00:21:25.077 "rdma_srq_size": 0, 00:21:25.077 "io_path_stat": false, 00:21:25.077 "allow_accel_sequence": false, 00:21:25.077 "rdma_max_cq_size": 0, 00:21:25.077 "rdma_cm_event_timeout_ms": 0, 00:21:25.077 "dhchap_digests": [ 00:21:25.077 "sha256", 00:21:25.077 "sha384", 00:21:25.077 "sha512" 00:21:25.077 ], 00:21:25.077 "dhchap_dhgroups": [ 00:21:25.077 "null", 00:21:25.077 "ffdhe2048", 00:21:25.077 "ffdhe3072", 00:21:25.077 "ffdhe4096", 00:21:25.077 "ffdhe6144", 00:21:25.077 "ffdhe8192" 00:21:25.077 ] 00:21:25.077 } 00:21:25.077 }, 00:21:25.077 { 00:21:25.077 "method": "bdev_nvme_attach_controller", 00:21:25.077 "params": { 00:21:25.077 "name": "nvme0", 00:21:25.077 "trtype": "TCP", 00:21:25.077 "adrfam": "IPv4", 00:21:25.077 "traddr": "10.0.0.2", 00:21:25.077 "trsvcid": "4420", 00:21:25.077 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.077 "prchk_reftag": false, 00:21:25.077 "prchk_guard": false, 00:21:25.077 "ctrlr_loss_timeout_sec": 0, 00:21:25.077 "reconnect_delay_sec": 0, 00:21:25.077 "fast_io_fail_timeout_sec": 0, 00:21:25.077 "psk": "key0", 00:21:25.077 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:25.077 "hdgst": false, 00:21:25.077 "ddgst": false 00:21:25.077 } 00:21:25.077 }, 00:21:25.077 { 00:21:25.077 "method": "bdev_nvme_set_hotplug", 00:21:25.077 "params": { 00:21:25.077 "period_us": 100000, 00:21:25.077 "enable": false 00:21:25.077 } 00:21:25.077 }, 00:21:25.077 { 00:21:25.077 "method": "bdev_enable_histogram", 00:21:25.077 "params": { 00:21:25.077 "name": "nvme0n1", 00:21:25.077 "enable": true 00:21:25.077 } 00:21:25.077 }, 00:21:25.077 { 00:21:25.077 "method": "bdev_wait_for_examine" 00:21:25.077 } 00:21:25.077 ] 00:21:25.077 }, 00:21:25.077 { 00:21:25.077 "subsystem": "nbd", 00:21:25.077 "config": [] 00:21:25.077 } 00:21:25.077 ] 00:21:25.077 }' 00:21:25.077 09:35:56 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1186386 00:21:25.077 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1186386 
']' 00:21:25.077 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1186386 00:21:25.077 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:25.077 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:25.077 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1186386 00:21:25.077 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:25.077 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:25.077 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1186386' 00:21:25.077 killing process with pid 1186386 00:21:25.077 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1186386 00:21:25.077 Received shutdown signal, test time was about 1.000000 seconds 00:21:25.077 00:21:25.077 Latency(us) 00:21:25.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.077 =================================================================================================================== 00:21:25.077 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.077 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1186386 00:21:25.077 09:35:56 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1186294 00:21:25.077 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1186294 ']' 00:21:25.078 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1186294 00:21:25.078 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:25.078 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:25.078 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1186294 00:21:25.339 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:25.339 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:25.339 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1186294' 00:21:25.339 killing process with pid 1186294 00:21:25.339 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1186294 00:21:25.339 09:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1186294 00:21:25.339 09:35:57 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:25.339 09:35:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:25.339 09:35:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:25.339 09:35:57 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:25.339 "subsystems": [ 00:21:25.339 { 00:21:25.339 "subsystem": "keyring", 00:21:25.339 "config": [ 00:21:25.339 { 00:21:25.339 "method": "keyring_file_add_key", 00:21:25.339 "params": { 00:21:25.339 "name": "key0", 00:21:25.339 "path": "/tmp/tmp.lRIeTRrdM7" 00:21:25.339 } 00:21:25.339 } 00:21:25.339 ] 00:21:25.339 }, 00:21:25.339 { 00:21:25.339 "subsystem": "iobuf", 00:21:25.339 "config": [ 00:21:25.339 { 00:21:25.339 "method": "iobuf_set_options", 00:21:25.339 "params": { 00:21:25.339 "small_pool_count": 8192, 00:21:25.339 "large_pool_count": 1024, 00:21:25.339 "small_bufsize": 8192, 00:21:25.339 "large_bufsize": 135168 00:21:25.339 } 00:21:25.339 } 00:21:25.339 ] 00:21:25.339 }, 00:21:25.339 { 00:21:25.339 "subsystem": "sock", 
00:21:25.339 "config": [ 00:21:25.339 { 00:21:25.339 "method": "sock_set_default_impl", 00:21:25.339 "params": { 00:21:25.339 "impl_name": "posix" 00:21:25.339 } 00:21:25.339 }, 00:21:25.339 { 00:21:25.339 "method": "sock_impl_set_options", 00:21:25.339 "params": { 00:21:25.339 "impl_name": "ssl", 00:21:25.339 "recv_buf_size": 4096, 00:21:25.339 "send_buf_size": 4096, 00:21:25.339 "enable_recv_pipe": true, 00:21:25.339 "enable_quickack": false, 00:21:25.339 "enable_placement_id": 0, 00:21:25.339 "enable_zerocopy_send_server": true, 00:21:25.339 "enable_zerocopy_send_client": false, 00:21:25.339 "zerocopy_threshold": 0, 00:21:25.339 "tls_version": 0, 00:21:25.339 "enable_ktls": false 00:21:25.339 } 00:21:25.339 }, 00:21:25.339 { 00:21:25.339 "method": "sock_impl_set_options", 00:21:25.339 "params": { 00:21:25.339 "impl_name": "posix", 00:21:25.339 "recv_buf_size": 2097152, 00:21:25.339 "send_buf_size": 2097152, 00:21:25.339 "enable_recv_pipe": true, 00:21:25.339 "enable_quickack": false, 00:21:25.339 "enable_placement_id": 0, 00:21:25.339 "enable_zerocopy_send_server": true, 00:21:25.339 "enable_zerocopy_send_client": false, 00:21:25.339 "zerocopy_threshold": 0, 00:21:25.339 "tls_version": 0, 00:21:25.339 "enable_ktls": false 00:21:25.339 } 00:21:25.339 } 00:21:25.339 ] 00:21:25.339 }, 00:21:25.339 { 00:21:25.339 "subsystem": "vmd", 00:21:25.339 "config": [] 00:21:25.339 }, 00:21:25.339 { 00:21:25.339 "subsystem": "accel", 00:21:25.339 "config": [ 00:21:25.339 { 00:21:25.339 "method": "accel_set_options", 00:21:25.339 "params": { 00:21:25.339 "small_cache_size": 128, 00:21:25.339 "large_cache_size": 16, 00:21:25.339 "task_count": 2048, 00:21:25.339 "sequence_count": 2048, 00:21:25.339 "buf_count": 2048 00:21:25.339 } 00:21:25.339 } 00:21:25.339 ] 00:21:25.339 }, 00:21:25.339 { 00:21:25.339 "subsystem": "bdev", 00:21:25.339 "config": [ 00:21:25.339 { 00:21:25.339 "method": "bdev_set_options", 00:21:25.339 "params": { 00:21:25.339 "bdev_io_pool_size": 65535, 00:21:25.339 "bdev_io_cache_size": 256, 00:21:25.339 "bdev_auto_examine": true, 00:21:25.339 "iobuf_small_cache_size": 128, 00:21:25.339 "iobuf_large_cache_size": 16 00:21:25.339 } 00:21:25.339 }, 00:21:25.339 { 00:21:25.339 "method": "bdev_raid_set_options", 00:21:25.339 "params": { 00:21:25.339 "process_window_size_kb": 1024 00:21:25.339 } 00:21:25.339 }, 00:21:25.339 { 00:21:25.339 "method": "bdev_iscsi_set_options", 00:21:25.339 "params": { 00:21:25.339 "timeout_sec": 30 00:21:25.339 } 00:21:25.339 }, 00:21:25.339 { 00:21:25.339 "method": "bdev_nvme_set_options", 00:21:25.339 "params": { 00:21:25.339 "action_on_timeout": "none", 00:21:25.339 "timeout_us": 0, 00:21:25.339 "timeout_admin_us": 0, 00:21:25.339 "keep_alive_timeout_ms": 10000, 00:21:25.339 "arbitration_burst": 0, 00:21:25.339 "low_priority_weight": 0, 00:21:25.339 "medium_priority_weight": 0, 00:21:25.339 "high_priority_weight": 0, 00:21:25.339 "nvme_adminq_poll_period_us": 10000, 00:21:25.339 "nvme_ioq_poll_period_us": 0, 00:21:25.339 "io_queue_requests": 0, 00:21:25.339 "delay_cmd_submit": true, 00:21:25.339 "transport_retry_count": 4, 00:21:25.339 "bdev_retry_count": 3, 00:21:25.339 "transport_ack_timeout": 0, 00:21:25.339 "ctrlr_loss_timeout_sec": 0, 00:21:25.339 "reconnect_delay_sec": 0, 00:21:25.339 "fast_io_fail_timeout_sec": 0, 00:21:25.339 "disable_auto_failback": false, 00:21:25.339 "generate_uuids": false, 00:21:25.339 "transport_tos": 0, 00:21:25.339 "nvme_error_stat": false, 00:21:25.339 "rdma_srq_size": 0, 00:21:25.339 "io_path_stat": false, 00:21:25.339 
"allow_accel_sequence": false, 00:21:25.339 "rdma_max_cq_size": 0, 00:21:25.339 "rdma_cm_event_timeout_ms": 0, 00:21:25.339 "dhchap_digests": [ 00:21:25.339 "sha256", 00:21:25.339 "sha384", 00:21:25.339 "sha512" 00:21:25.339 ], 00:21:25.339 "dhchap_dhgroups": [ 00:21:25.339 "null", 00:21:25.339 "ffdhe2048", 00:21:25.339 "ffdhe3072", 00:21:25.339 "ffdhe4096", 00:21:25.339 "ffdhe6144", 00:21:25.339 "ffdhe8192" 00:21:25.339 ] 00:21:25.339 } 00:21:25.339 }, 00:21:25.339 { 00:21:25.340 "method": "bdev_nvme_set_hotplug", 00:21:25.340 "params": { 00:21:25.340 "period_us": 100000, 00:21:25.340 "enable": false 00:21:25.340 } 00:21:25.340 }, 00:21:25.340 { 00:21:25.340 "method": "bdev_malloc_create", 00:21:25.340 "params": { 00:21:25.340 "name": "malloc0", 00:21:25.340 "num_blocks": 8192, 00:21:25.340 "block_size": 4096, 00:21:25.340 "physical_block_size": 4096, 00:21:25.340 "uuid": "0aba9423-017b-4531-89a8-6bfafce7467e", 00:21:25.340 "optimal_io_boundary": 0 00:21:25.340 } 00:21:25.340 }, 00:21:25.340 { 00:21:25.340 "method": "bdev_wait_for_examine" 00:21:25.340 } 00:21:25.340 ] 00:21:25.340 }, 00:21:25.340 { 00:21:25.340 "subsystem": "nbd", 00:21:25.340 "config": [] 00:21:25.340 }, 00:21:25.340 { 00:21:25.340 "subsystem": "scheduler", 00:21:25.340 "config": [ 00:21:25.340 { 00:21:25.340 "method": "framework_set_scheduler", 00:21:25.340 "params": { 00:21:25.340 "name": "static" 00:21:25.340 } 00:21:25.340 } 00:21:25.340 ] 00:21:25.340 }, 00:21:25.340 { 00:21:25.340 "subsystem": "nvmf", 00:21:25.340 "config": [ 00:21:25.340 { 00:21:25.340 "method": "nvmf_set_config", 00:21:25.340 "params": { 00:21:25.340 "discovery_filter": "match_any", 00:21:25.340 "admin_cmd_passthru": { 00:21:25.340 "identify_ctrlr": false 00:21:25.340 } 00:21:25.340 } 00:21:25.340 }, 00:21:25.340 { 00:21:25.340 "method": "nvmf_set_max_subsystems", 00:21:25.340 "params": { 00:21:25.340 "max_subsystems": 1024 00:21:25.340 } 00:21:25.340 }, 00:21:25.340 { 00:21:25.340 "method": "nvmf_set_crdt", 00:21:25.340 "params": { 00:21:25.340 "crdt1": 0, 00:21:25.340 "crdt2": 0, 00:21:25.340 "crdt3": 0 00:21:25.340 } 00:21:25.340 }, 00:21:25.340 { 00:21:25.340 "method": "nvmf_create_transport", 00:21:25.340 "params": { 00:21:25.340 "trtype": "TCP", 00:21:25.340 "max_queue_depth": 128, 00:21:25.340 "max_io_qpairs_per_ctrlr": 127, 00:21:25.340 "in_capsule_data_size": 4096, 00:21:25.340 "max_io_size": 131072, 00:21:25.340 "io_unit_size": 131072, 00:21:25.340 "max_aq_depth": 128, 00:21:25.340 "num_shared_buffers": 511, 00:21:25.340 "buf_cache_size": 4294967295, 00:21:25.340 "dif_insert_or_strip": false, 00:21:25.340 "zcopy": false, 00:21:25.340 "c2h_success": false, 00:21:25.340 "sock_priority": 0, 00:21:25.340 "abort_timeout_sec": 1, 00:21:25.340 "ack_timeout": 0, 00:21:25.340 "data_wr_pool_size": 0 00:21:25.340 } 00:21:25.340 }, 00:21:25.340 { 00:21:25.340 "method": "nvmf_create_subsystem", 00:21:25.340 "params": { 00:21:25.340 "nqn": "nqn.2016-06.io.spdk:cnode1", 09:35:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.340 00:21:25.340 "allow_any_host": false, 00:21:25.340 "serial_number": "00000000000000000000", 00:21:25.340 "model_number": "SPDK bdev Controller", 00:21:25.340 "max_namespaces": 32, 00:21:25.340 "min_cntlid": 1, 00:21:25.340 "max_cntlid": 65519, 00:21:25.340 "ana_reporting": false 00:21:25.340 } 00:21:25.340 }, 00:21:25.340 { 00:21:25.340 "method": "nvmf_subsystem_add_host", 00:21:25.340 "params": { 00:21:25.340 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.340 "host": "nqn.2016-06.io.spdk:host1", 
00:21:25.340 "psk": "key0" 00:21:25.340 } 00:21:25.340 }, 00:21:25.340 { 00:21:25.340 "method": "nvmf_subsystem_add_ns", 00:21:25.340 "params": { 00:21:25.340 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.340 "namespace": { 00:21:25.340 "nsid": 1, 00:21:25.340 "bdev_name": "malloc0", 00:21:25.340 "nguid": "0ABA9423017B453189A86BFAFCE7467E", 00:21:25.340 "uuid": "0aba9423-017b-4531-89a8-6bfafce7467e", 00:21:25.340 "no_auto_visible": false 00:21:25.340 } 00:21:25.340 } 00:21:25.340 }, 00:21:25.340 { 00:21:25.340 "method": "nvmf_subsystem_add_listener", 00:21:25.340 "params": { 00:21:25.340 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.340 "listen_address": { 00:21:25.340 "trtype": "TCP", 00:21:25.340 "adrfam": "IPv4", 00:21:25.340 "traddr": "10.0.0.2", 00:21:25.340 "trsvcid": "4420" 00:21:25.340 }, 00:21:25.340 "secure_channel": true 00:21:25.340 } 00:21:25.340 } 00:21:25.340 ] 00:21:25.340 } 00:21:25.340 ] 00:21:25.340 }' 00:21:25.340 09:35:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1187066 00:21:25.340 09:35:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1187066 00:21:25.340 09:35:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:25.340 09:35:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1187066 ']' 00:21:25.340 09:35:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.340 09:35:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:25.340 09:35:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.340 09:35:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:25.340 09:35:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.340 [2024-06-11 09:35:57.133630] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:21:25.340 [2024-06-11 09:35:57.133698] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.602 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.602 [2024-06-11 09:35:57.214956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.602 [2024-06-11 09:35:57.279603] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.602 [2024-06-11 09:35:57.279637] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.602 [2024-06-11 09:35:57.279645] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.602 [2024-06-11 09:35:57.279651] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.602 [2024-06-11 09:35:57.279657] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:25.602 [2024-06-11 09:35:57.279715] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.863 [2024-06-11 09:35:57.476692] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.863 [2024-06-11 09:35:57.508700] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:25.863 [2024-06-11 09:35:57.517634] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.435 09:35:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:26.435 09:35:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:26.435 09:35:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:26.435 09:35:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:26.435 09:35:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.435 09:35:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.435 09:35:58 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1187119 00:21:26.435 09:35:58 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1187119 /var/tmp/bdevperf.sock 00:21:26.435 09:35:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 1187119 ']' 00:21:26.435 09:35:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.435 09:35:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:26.435 09:35:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
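The bdevperf half gets the same treatment: the bperfcfg JSON saved from the previous run, keyring key and nvme0 controller included, is echoed into a fresh bdevperf as /dev/fd/63. Once it is listening, the script confirms that the replayed config really re-attached the controller before starting I/O; in sketch form (jq assumed available, as in the check further below):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Ask the freshly configured bdevperf which NVMe controllers exist.
    name=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]   # same assertion as the [[ nvme0 == \n\v\m\e\0 ]] check below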
00:21:26.435 09:35:58 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:26.435 09:35:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:26.435 09:35:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.435 09:35:58 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:26.435 "subsystems": [ 00:21:26.435 { 00:21:26.435 "subsystem": "keyring", 00:21:26.435 "config": [ 00:21:26.435 { 00:21:26.435 "method": "keyring_file_add_key", 00:21:26.435 "params": { 00:21:26.435 "name": "key0", 00:21:26.435 "path": "/tmp/tmp.lRIeTRrdM7" 00:21:26.435 } 00:21:26.435 } 00:21:26.435 ] 00:21:26.435 }, 00:21:26.435 { 00:21:26.435 "subsystem": "iobuf", 00:21:26.435 "config": [ 00:21:26.435 { 00:21:26.435 "method": "iobuf_set_options", 00:21:26.435 "params": { 00:21:26.435 "small_pool_count": 8192, 00:21:26.435 "large_pool_count": 1024, 00:21:26.435 "small_bufsize": 8192, 00:21:26.435 "large_bufsize": 135168 00:21:26.435 } 00:21:26.435 } 00:21:26.435 ] 00:21:26.435 }, 00:21:26.435 { 00:21:26.435 "subsystem": "sock", 00:21:26.435 "config": [ 00:21:26.435 { 00:21:26.435 "method": "sock_set_default_impl", 00:21:26.435 "params": { 00:21:26.435 "impl_name": "posix" 00:21:26.435 } 00:21:26.435 }, 00:21:26.435 { 00:21:26.435 "method": "sock_impl_set_options", 00:21:26.435 "params": { 00:21:26.435 "impl_name": "ssl", 00:21:26.435 "recv_buf_size": 4096, 00:21:26.435 "send_buf_size": 4096, 00:21:26.435 "enable_recv_pipe": true, 00:21:26.435 "enable_quickack": false, 00:21:26.435 "enable_placement_id": 0, 00:21:26.435 "enable_zerocopy_send_server": true, 00:21:26.435 "enable_zerocopy_send_client": false, 00:21:26.435 "zerocopy_threshold": 0, 00:21:26.435 "tls_version": 0, 00:21:26.435 "enable_ktls": false 00:21:26.435 } 00:21:26.435 }, 00:21:26.435 { 00:21:26.435 "method": "sock_impl_set_options", 00:21:26.435 "params": { 00:21:26.435 "impl_name": "posix", 00:21:26.435 "recv_buf_size": 2097152, 00:21:26.435 "send_buf_size": 2097152, 00:21:26.435 "enable_recv_pipe": true, 00:21:26.435 "enable_quickack": false, 00:21:26.435 "enable_placement_id": 0, 00:21:26.435 "enable_zerocopy_send_server": true, 00:21:26.435 "enable_zerocopy_send_client": false, 00:21:26.435 "zerocopy_threshold": 0, 00:21:26.435 "tls_version": 0, 00:21:26.435 "enable_ktls": false 00:21:26.435 } 00:21:26.435 } 00:21:26.435 ] 00:21:26.435 }, 00:21:26.435 { 00:21:26.435 "subsystem": "vmd", 00:21:26.435 "config": [] 00:21:26.435 }, 00:21:26.435 { 00:21:26.435 "subsystem": "accel", 00:21:26.435 "config": [ 00:21:26.435 { 00:21:26.435 "method": "accel_set_options", 00:21:26.435 "params": { 00:21:26.436 "small_cache_size": 128, 00:21:26.436 "large_cache_size": 16, 00:21:26.436 "task_count": 2048, 00:21:26.436 "sequence_count": 2048, 00:21:26.436 "buf_count": 2048 00:21:26.436 } 00:21:26.436 } 00:21:26.436 ] 00:21:26.436 }, 00:21:26.436 { 00:21:26.436 "subsystem": "bdev", 00:21:26.436 "config": [ 00:21:26.436 { 00:21:26.436 "method": "bdev_set_options", 00:21:26.436 "params": { 00:21:26.436 "bdev_io_pool_size": 65535, 00:21:26.436 "bdev_io_cache_size": 256, 00:21:26.436 "bdev_auto_examine": true, 00:21:26.436 "iobuf_small_cache_size": 128, 00:21:26.436 "iobuf_large_cache_size": 16 00:21:26.436 } 00:21:26.436 }, 00:21:26.436 { 00:21:26.436 "method": "bdev_raid_set_options", 00:21:26.436 "params": { 00:21:26.436 "process_window_size_kb": 1024 00:21:26.436 } 
00:21:26.436 }, 00:21:26.436 { 00:21:26.436 "method": "bdev_iscsi_set_options", 00:21:26.436 "params": { 00:21:26.436 "timeout_sec": 30 00:21:26.436 } 00:21:26.436 }, 00:21:26.436 { 00:21:26.436 "method": "bdev_nvme_set_options", 00:21:26.436 "params": { 00:21:26.436 "action_on_timeout": "none", 00:21:26.436 "timeout_us": 0, 00:21:26.436 "timeout_admin_us": 0, 00:21:26.436 "keep_alive_timeout_ms": 10000, 00:21:26.436 "arbitration_burst": 0, 00:21:26.436 "low_priority_weight": 0, 00:21:26.436 "medium_priority_weight": 0, 00:21:26.436 "high_priority_weight": 0, 00:21:26.436 "nvme_adminq_poll_period_us": 10000, 00:21:26.436 "nvme_ioq_poll_period_us": 0, 00:21:26.436 "io_queue_requests": 512, 00:21:26.436 "delay_cmd_submit": true, 00:21:26.436 "transport_retry_count": 4, 00:21:26.436 "bdev_retry_count": 3, 00:21:26.436 "transport_ack_timeout": 0, 00:21:26.436 "ctrlr_loss_timeout_sec": 0, 00:21:26.436 "reconnect_delay_sec": 0, 00:21:26.436 "fast_io_fail_timeout_sec": 0, 00:21:26.436 "disable_auto_failback": false, 00:21:26.436 "generate_uuids": false, 00:21:26.436 "transport_tos": 0, 00:21:26.436 "nvme_error_stat": false, 00:21:26.436 "rdma_srq_size": 0, 00:21:26.436 "io_path_stat": false, 00:21:26.436 "allow_accel_sequence": false, 00:21:26.436 "rdma_max_cq_size": 0, 00:21:26.436 "rdma_cm_event_timeout_ms": 0, 00:21:26.436 "dhchap_digests": [ 00:21:26.436 "sha256", 00:21:26.436 "sha384", 00:21:26.436 "sha512" 00:21:26.436 ], 00:21:26.436 "dhchap_dhgroups": [ 00:21:26.436 "null", 00:21:26.436 "ffdhe2048", 00:21:26.436 "ffdhe3072", 00:21:26.436 "ffdhe4096", 00:21:26.436 "ffdhe6144", 00:21:26.436 "ffdhe8192" 00:21:26.436 ] 00:21:26.436 } 00:21:26.436 }, 00:21:26.436 { 00:21:26.436 "method": "bdev_nvme_attach_controller", 00:21:26.436 "params": { 00:21:26.436 "name": "nvme0", 00:21:26.436 "trtype": "TCP", 00:21:26.436 "adrfam": "IPv4", 00:21:26.436 "traddr": "10.0.0.2", 00:21:26.436 "trsvcid": "4420", 00:21:26.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:26.436 "prchk_reftag": false, 00:21:26.436 "prchk_guard": false, 00:21:26.436 "ctrlr_loss_timeout_sec": 0, 00:21:26.436 "reconnect_delay_sec": 0, 00:21:26.436 "fast_io_fail_timeout_sec": 0, 00:21:26.436 "psk": "key0", 00:21:26.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:26.436 "hdgst": false, 00:21:26.436 "ddgst": false 00:21:26.436 } 00:21:26.436 }, 00:21:26.436 { 00:21:26.436 "method": "bdev_nvme_set_hotplug", 00:21:26.436 "params": { 00:21:26.436 "period_us": 100000, 00:21:26.436 "enable": false 00:21:26.436 } 00:21:26.436 }, 00:21:26.436 { 00:21:26.436 "method": "bdev_enable_histogram", 00:21:26.436 "params": { 00:21:26.436 "name": "nvme0n1", 00:21:26.436 "enable": true 00:21:26.436 } 00:21:26.436 }, 00:21:26.436 { 00:21:26.436 "method": "bdev_wait_for_examine" 00:21:26.436 } 00:21:26.436 ] 00:21:26.436 }, 00:21:26.436 { 00:21:26.436 "subsystem": "nbd", 00:21:26.436 "config": [] 00:21:26.436 } 00:21:26.436 ] 00:21:26.436 }' 00:21:26.436 [2024-06-11 09:35:58.068371] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:21:26.436 [2024-06-11 09:35:58.068421] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1187119 ] 00:21:26.436 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.436 [2024-06-11 09:35:58.125759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.436 [2024-06-11 09:35:58.189817] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.697 [2024-06-11 09:35:58.328857] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.269 09:35:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:27.269 09:35:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:27.269 09:35:58 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:27.269 09:35:58 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:27.529 09:35:59 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.529 09:35:59 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:27.529 Running I/O for 1 seconds... 00:21:28.471 00:21:28.471 Latency(us) 00:21:28.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.472 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:28.472 Verification LBA range: start 0x0 length 0x2000 00:21:28.472 nvme0n1 : 1.04 2035.18 7.95 0.00 0.00 61931.93 6389.76 139810.13 00:21:28.472 =================================================================================================================== 00:21:28.472 Total : 2035.18 7.95 0.00 0.00 61931.93 6389.76 139810.13 00:21:28.734 0 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # type=--id 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # id=0 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # for n in $shm_files 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:28.734 nvmf_trace.0 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@822 -- # return 0 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1187119 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1187119 ']' 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1187119 
00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1187119 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1187119' 00:21:28.734 killing process with pid 1187119 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1187119 00:21:28.734 Received shutdown signal, test time was about 1.000000 seconds 00:21:28.734 00:21:28.734 Latency(us) 00:21:28.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.734 =================================================================================================================== 00:21:28.734 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:28.734 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1187119 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:28.996 rmmod nvme_tcp 00:21:28.996 rmmod nvme_fabrics 00:21:28.996 rmmod nvme_keyring 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1187066 ']' 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1187066 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 1187066 ']' 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 1187066 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1187066 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1187066' 00:21:28.996 killing process with pid 1187066 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 1187066 00:21:28.996 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 1187066 00:21:29.257 09:36:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:29.257 09:36:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:29.257 09:36:00 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:29.257 09:36:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:29.257 09:36:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:29.257 09:36:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.257 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:29.257 09:36:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.176 09:36:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:31.176 09:36:02 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.qtDdo77mdR /tmp/tmp.ufo6izfGeK /tmp/tmp.lRIeTRrdM7 00:21:31.176 00:21:31.176 real 1m21.057s 00:21:31.176 user 2m4.134s 00:21:31.176 sys 0m27.003s 00:21:31.176 09:36:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:31.176 09:36:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.176 ************************************ 00:21:31.176 END TEST nvmf_tls 00:21:31.176 ************************************ 00:21:31.176 09:36:02 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:31.176 09:36:02 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:31.176 09:36:02 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:31.176 09:36:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:31.498 ************************************ 00:21:31.498 START TEST nvmf_fips 00:21:31.498 ************************************ 00:21:31.498 09:36:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:31.498 * Looking for test storage... 
00:21:31.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.498 09:36:03 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:31.498 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:21:31.499 Error setting digest 00:21:31.499 00325BBC667F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:31.499 00325BBC667F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:31.499 09:36:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:39.669 
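[Editor's sketch] The nvmf/common.sh walk above builds per-family PCI ID tables (e810, x722, mlx) and then scans the host for matching ports, as the "Found ..." lines below show. A condensed standalone sketch of that classification, assuming the sysfs layout seen on this rig:

    intel=0x8086
    e810=(0x1592 0x159b)                     # Columbiaville IDs, as in nvmf/common.sh
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == "$intel" ]] || continue
        for id in "${e810[@]}"; do
            if [[ $device == "$id" ]]; then
                echo "Found ${pci##*/} ($vendor - $device)"
                ls "$pci/net" 2>/dev/null    # kernel net devices bound to this port
            fi
        done
    done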
09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:39.669 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:39.669 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:39.669 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:39.669 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.669 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:39.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:39.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:21:39.670 00:21:39.670 --- 10.0.0.2 ping statistics --- 00:21:39.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.670 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:39.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:39.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.439 ms 00:21:39.670 00:21:39.670 --- 10.0.0.1 ping statistics --- 00:21:39.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.670 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1192107 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1192107 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 1192107 ']' 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:39.670 09:36:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:39.670 [2024-06-11 09:36:10.505595] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:21:39.670 [2024-06-11 09:36:10.505669] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.670 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.670 [2024-06-11 09:36:10.575405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.670 [2024-06-11 09:36:10.651207] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.670 [2024-06-11 09:36:10.651248] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:39.670 [2024-06-11 09:36:10.651256] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.670 [2024-06-11 09:36:10.651262] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.670 [2024-06-11 09:36:10.651268] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:39.670 [2024-06-11 09:36:10.651291] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.670 09:36:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:39.670 09:36:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:21:39.670 09:36:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:39.670 09:36:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:39.670 09:36:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:39.670 09:36:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.670 09:36:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:39.670 09:36:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:39.670 09:36:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:39.670 09:36:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:39.670 09:36:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:39.670 09:36:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:39.670 09:36:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:39.670 09:36:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:39.931 [2024-06-11 09:36:11.582812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.931 [2024-06-11 09:36:11.598818] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:39.931 [2024-06-11 09:36:11.599014] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.931 [2024-06-11 09:36:11.625594] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:39.931 malloc0 00:21:39.931 09:36:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:39.931 09:36:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1192602 00:21:39.931 09:36:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1192602 /var/tmp/bdevperf.sock 00:21:39.931 09:36:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:39.931 09:36:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 1192602 ']' 00:21:39.931 09:36:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.931 09:36:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- 
# local max_retries=100 00:21:39.931 09:36:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:39.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:39.931 09:36:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:39.931 09:36:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:39.931 [2024-06-11 09:36:11.720554] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:21:39.931 [2024-06-11 09:36:11.720605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192602 ] 00:21:39.931 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.192 [2024-06-11 09:36:11.769523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.192 [2024-06-11 09:36:11.821577] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.192 09:36:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:40.192 09:36:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:21:40.192 09:36:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:40.453 [2024-06-11 09:36:12.081500] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:40.453 [2024-06-11 09:36:12.081561] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:40.453 TLSTESTn1 00:21:40.453 09:36:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:40.713 Running I/O for 10 seconds... 
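[Editor's sketch] Before the 10-second verify pass reports its numbers below: stripped of xtrace noise, the PSK handoff just traced reduces to a few commands. The key, flags, and paths are the ones echoed in the trace; note this build still accepts --psk as a file path, a form the log itself warns is deprecated for removal in v24.09:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
    chmod 0600 key.txt                       # the test tightens permissions before use
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk key.txt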
00:21:50.713 00:21:50.713 Latency(us) 00:21:50.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.713 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:50.713 Verification LBA range: start 0x0 length 0x2000 00:21:50.713 TLSTESTn1 : 10.05 3536.80 13.82 0.00 0.00 36089.79 5816.32 64225.28 00:21:50.713 =================================================================================================================== 00:21:50.713 Total : 3536.80 13.82 0.00 0.00 36089.79 5816.32 64225.28 00:21:50.713 0 00:21:50.713 09:36:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:50.713 09:36:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:50.713 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # type=--id 00:21:50.713 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # id=0 00:21:50.713 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:21:50.713 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:50.713 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:21:50.713 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:21:50.713 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # for n in $shm_files 00:21:50.713 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:50.713 nvmf_trace.0 00:21:50.713 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@822 -- # return 0 00:21:50.713 09:36:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1192602 00:21:50.713 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 1192602 ']' 00:21:50.713 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 1192602 00:21:50.713 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:21:50.713 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:50.714 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1192602 00:21:50.714 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:50.714 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:50.714 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1192602' 00:21:50.714 killing process with pid 1192602 00:21:50.714 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 1192602 00:21:50.714 Received shutdown signal, test time was about 10.000000 seconds 00:21:50.714 00:21:50.714 Latency(us) 00:21:50.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.714 =================================================================================================================== 00:21:50.714 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:50.714 [2024-06-11 09:36:22.528161] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:50.714 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 1192602 00:21:50.974 09:36:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:50.974 09:36:22 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:21:50.974 09:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:50.974 09:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:50.974 09:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:50.974 09:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:50.974 09:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:50.974 rmmod nvme_tcp 00:21:50.974 rmmod nvme_fabrics 00:21:50.974 rmmod nvme_keyring 00:21:50.974 09:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:50.974 09:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:50.974 09:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:50.974 09:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1192107 ']' 00:21:50.974 09:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1192107 00:21:50.974 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 1192107 ']' 00:21:50.974 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 1192107 00:21:50.974 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:21:50.974 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:50.974 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1192107 00:21:50.975 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:50.975 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:50.975 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1192107' 00:21:50.975 killing process with pid 1192107 00:21:50.975 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 1192107 00:21:50.975 [2024-06-11 09:36:22.768984] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:50.975 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 1192107 00:21:51.235 09:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:51.235 09:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:51.235 09:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:51.235 09:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:51.235 09:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:51.235 09:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.235 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.235 09:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.781 09:36:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:53.781 09:36:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:53.781 00:21:53.781 real 0m21.983s 00:21:53.781 user 0m22.512s 00:21:53.781 sys 0m9.746s 00:21:53.781 09:36:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:53.781 09:36:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:53.781 ************************************ 00:21:53.781 END TEST nvmf_fips 
00:21:53.781 ************************************ 00:21:53.781 09:36:25 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:53.781 09:36:25 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:53.781 09:36:25 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:53.781 09:36:25 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:53.781 09:36:25 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:53.781 09:36:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.369 09:36:31 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:00.370 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.370 09:36:31 nvmf_tcp -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:00.370 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:00.370 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:00.370 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:22:00.370 09:36:31 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:00.370 09:36:31 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:00.370 09:36:31 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:00.370 09:36:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
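Before handing off to run_test, the harness rebuilds net_devs via gather_supported_nvmf_pci_devs: filter the PCI bus for known E810/X722/Mellanox vendor:device IDs, then map each matching function to its kernel interface through sysfs. A rough sketch of the E810 branch, under the assumption that pci_bus_cache maps "vendor:device" keys to space-separated BDF lists (that array is populated elsewhere in nvmf/common.sh):

intel=0x8086
e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
for pci in "${e810[@]}"; do
  # each PCI function exposes its netdev name under .../net/
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")    # strip the sysfs path, keep the ifname
  net_devs+=("${pci_net_devs[@]}")
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
done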
00:22:00.370 ************************************ 00:22:00.370 START TEST nvmf_perf_adq 00:22:00.370 ************************************ 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:00.370 * Looking for test storage... 00:22:00.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:00.370 09:36:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:07.032 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.032 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:07.032 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:07.032 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:07.032 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:07.032 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:07.032 09:36:38 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:07.032 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:07.032 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:07.032 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:07.032 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:07.032 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:07.032 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:07.032 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:07.032 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:07.032 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:07.033 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:07.033 Found 0000:4b:00.1 (0x8086 - 0x159b) 
00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:07.033 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:07.033 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:07.033 09:36:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:08.421 09:36:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:10.336 09:36:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:15.629 09:36:47 
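The adq_reload_driver step just traced is three commands: unload ice, load it again, and wait for link. Bouncing the driver drops any stale traffic-class or channel state so the ADQ configuration later starts from defaults. Sketch only; error handling in perf_adq.sh may differ from what the xtrace shows:

rmmod ice || true    # tolerate "module not loaded" on the first pass
modprobe ice
sleep 5              # give the E810 ports a few seconds to renegotiate link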
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:15.629 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:15.629 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:15.629 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:15.629 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:15.630 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.630 09:36:47 
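The nvmf_tcp_init sequence above builds a two-namespace topology so initiator and target traffic actually traverses the NIC pair: cvl_0_0 (target side, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace, while cvl_0_1 (initiator side, 10.0.0.1) stays in the root namespace. The commands, collected from the trace into one block:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# let NVMe/TCP (port 4420) back in past the host firewall
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two pings that follow (root namespace to 10.0.0.2, target namespace to 10.0.0.1) confirm the path works in both directions before any NVMe traffic is attempted.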
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:15.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:22:15.630 00:22:15.630 --- 10.0.0.2 ping statistics --- 00:22:15.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.630 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:15.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:22:15.630 00:22:15.630 --- 10.0.0.1 ping statistics --- 00:22:15.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.630 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1204287 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1204287 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 1204287 ']' 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:15.630 09:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.892 [2024-06-11 09:36:47.456190] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
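nvmfappstart launches nvmf_tgt inside the target namespace with a 4-core mask and --wait-for-rpc, then waitforlisten polls the RPC socket until the app answers before any configuration is sent. A hedged approximation of that loop (the real waitforlisten in autotest_common.sh has richer timeout and error reporting; rpc.py is SPDK's stock JSON-RPC client):

ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
  # rpc_get_methods succeeds as soon as the RPC server is listening
  rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.5
done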
00:22:15.892 [2024-06-11 09:36:47.456257] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.892 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.892 [2024-06-11 09:36:47.532797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:15.892 [2024-06-11 09:36:47.633280] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.892 [2024-06-11 09:36:47.633350] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.892 [2024-06-11 09:36:47.633358] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.892 [2024-06-11 09:36:47.633365] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.892 [2024-06-11 09:36:47.633371] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:15.892 [2024-06-11 09:36:47.633452] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.892 [2024-06-11 09:36:47.633610] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.892 [2024-06-11 09:36:47.633782] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:15.892 [2024-06-11 09:36:47.633783] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.836 [2024-06-11 09:36:48.511282] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.836 Malloc1 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.836 [2024-06-11 09:36:48.570709] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1204612 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:16.836 09:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:16.836 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.383 09:36:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:19.383 09:36:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:19.383 09:36:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.383 09:36:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:19.383 09:36:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:19.383 "tick_rate": 2400000000, 
00:22:19.383 "poll_groups": [ 00:22:19.383 { 00:22:19.383 "name": "nvmf_tgt_poll_group_000", 00:22:19.383 "admin_qpairs": 1, 00:22:19.383 "io_qpairs": 1, 00:22:19.383 "current_admin_qpairs": 1, 00:22:19.383 "current_io_qpairs": 1, 00:22:19.383 "pending_bdev_io": 0, 00:22:19.383 "completed_nvme_io": 19517, 00:22:19.383 "transports": [ 00:22:19.383 { 00:22:19.383 "trtype": "TCP" 00:22:19.383 } 00:22:19.383 ] 00:22:19.383 }, 00:22:19.383 { 00:22:19.383 "name": "nvmf_tgt_poll_group_001", 00:22:19.383 "admin_qpairs": 0, 00:22:19.383 "io_qpairs": 1, 00:22:19.383 "current_admin_qpairs": 0, 00:22:19.383 "current_io_qpairs": 1, 00:22:19.383 "pending_bdev_io": 0, 00:22:19.383 "completed_nvme_io": 26850, 00:22:19.383 "transports": [ 00:22:19.383 { 00:22:19.383 "trtype": "TCP" 00:22:19.383 } 00:22:19.383 ] 00:22:19.383 }, 00:22:19.383 { 00:22:19.383 "name": "nvmf_tgt_poll_group_002", 00:22:19.383 "admin_qpairs": 0, 00:22:19.383 "io_qpairs": 1, 00:22:19.383 "current_admin_qpairs": 0, 00:22:19.383 "current_io_qpairs": 1, 00:22:19.383 "pending_bdev_io": 0, 00:22:19.383 "completed_nvme_io": 20649, 00:22:19.383 "transports": [ 00:22:19.383 { 00:22:19.383 "trtype": "TCP" 00:22:19.383 } 00:22:19.383 ] 00:22:19.383 }, 00:22:19.383 { 00:22:19.383 "name": "nvmf_tgt_poll_group_003", 00:22:19.383 "admin_qpairs": 0, 00:22:19.383 "io_qpairs": 1, 00:22:19.383 "current_admin_qpairs": 0, 00:22:19.383 "current_io_qpairs": 1, 00:22:19.383 "pending_bdev_io": 0, 00:22:19.383 "completed_nvme_io": 20108, 00:22:19.383 "transports": [ 00:22:19.383 { 00:22:19.383 "trtype": "TCP" 00:22:19.383 } 00:22:19.383 ] 00:22:19.383 } 00:22:19.383 ] 00:22:19.383 }' 00:22:19.383 09:36:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:19.383 09:36:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:19.383 09:36:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:19.383 09:36:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:19.383 09:36:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1204612 00:22:27.525 Initializing NVMe Controllers 00:22:27.525 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:27.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:27.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:27.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:27.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:27.525 Initialization complete. Launching workers. 
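This nvmf_get_stats snapshot is the actual pass/fail signal for ADQ placement: with a 4-core mask, every poll group must be driving exactly one I/O qpair, meaning the NIC steered each of the four perf connections to a distinct queue and reactor. The check traced above, rewritten as a standalone snippet (rpc.py and the default socket path are assumed):

count=$(rpc.py nvmf_get_stats |
  jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' |
  wc -l)
[ "$count" -ne 4 ] && echo "ADQ placement broken: only $count of 4 poll groups busy"

Note the jq filter emits one line per matching poll group, so the count comes from wc -l, not from the jq output itself.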
00:22:27.525 ========================================================
00:22:27.525 Latency(us)
00:22:27.525 Device Information : IOPS MiB/s Average min max
00:22:27.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10950.00 42.77 5845.76 1888.18 10525.79
00:22:27.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14530.70 56.76 4416.92 1335.61 44420.50
00:22:27.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11263.90 44.00 5681.43 1865.84 11134.33
00:22:27.525 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10677.80 41.71 5993.13 1805.03 12135.95
00:22:27.525 ========================================================
00:22:27.525 Total : 47422.39 185.24 5402.10 1335.61 44420.50
00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:27.525 rmmod nvme_tcp 00:22:27.525 rmmod nvme_fabrics 00:22:27.525 rmmod nvme_keyring 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1204287 ']' 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1204287 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 1204287 ']' 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 1204287 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1204287 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1204287' 00:22:27.525 killing process with pid 1204287 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 1204287 00:22:27.525 09:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 1204287 00:22:27.525 09:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:27.525 09:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:27.525 09:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:27.525 09:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:27.525 09:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:27.525 09:36:59 nvmf_tcp.nvmf_perf_adq
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.525 09:36:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:27.525 09:36:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.437 09:37:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:29.437 09:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:29.437 09:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:31.350 09:37:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:33.324 09:37:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.614 
09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:38.614 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:38.614 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:38.614 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:38.614 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.614 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.614 
09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.615 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.615 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:38.615 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.615 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.615 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.615 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:38.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:22:38.615 00:22:38.615 --- 10.0.0.2 ping statistics --- 00:22:38.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.615 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:22:38.615 09:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:22:38.615 00:22:38.615 --- 10.0.0.1 ping statistics --- 00:22:38.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.615 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:38.615 net.core.busy_poll = 1 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:38.615 net.core.busy_read = 1 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1209329 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1209329 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 1209329 ']' 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:38.615 09:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.615 [2024-06-11 09:37:10.376032] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:22:38.615 [2024-06-11 09:37:10.376099] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.615 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.876 [2024-06-11 09:37:10.468406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.876 [2024-06-11 09:37:10.565035] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.876 [2024-06-11 09:37:10.565096] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.876 [2024-06-11 09:37:10.565105] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.876 [2024-06-11 09:37:10.565112] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.876 [2024-06-11 09:37:10.565119] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
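The sequence above is the complete ADQ host setup from perf_adq.sh: hardware TC offload is enabled on the E810 port, the channel-pkt-inspect-optimize private flag is cleared, busy polling is switched on, an mqprio root qdisc splits the four queues into two traffic classes in hardware "channel" mode, and a flower filter pins the NVMe/TCP listener's traffic to the second class. A condensed sketch of the same commands, using the NVMF_TARGET_NS_CMD convention the harness itself defines (device name, namespace, queue layout, and the 10.0.0.2:4420 listener are taken from the trace; everything else is illustrative, not the script verbatim):

    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)  # target lives in its own netns
    "${NVMF_TARGET_NS_CMD[@]}" ethtool --offload cvl_0_0 hw-tc-offload on
    "${NVMF_TARGET_NS_CMD[@]}" ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1   # busy-poll instead of sleeping on sockets
    sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3 (NVMe/TCP).
    "${NVMF_TARGET_NS_CMD[@]}" tc qdisc add dev cvl_0_0 root mqprio \
        num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    "${NVMF_TARGET_NS_CMD[@]}" tc qdisc add dev cvl_0_0 ingress
    # Steer traffic for the 4420 listener into TC1 entirely in hardware (skip_sw).
    "${NVMF_TARGET_NS_CMD[@]}" tc filter add dev cvl_0_0 protocol ip parent ffff: \
        prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    "${NVMF_TARGET_NS_CMD[@]}" "$rootdir"/scripts/perf/nvmf/set_xps_rxqs cvl_0_0

The target is then started inside the same namespace with --wait-for-rpc, so the placement-id and zero-copy socket options can be set over RPC before framework_start_init lets the reactors begin polling.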
00:22:38.876 [2024-06-11 09:37:10.565255] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.876 [2024-06-11 09:37:10.565398] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.876 [2024-06-11 09:37:10.565500] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.876 [2024-06-11 09:37:10.565502] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.446 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:39.446 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:22:39.446 09:37:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:39.446 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:39.446 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.706 [2024-06-11 09:37:11.421602] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.706 Malloc1 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.706 09:37:11 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.706 [2024-06-11 09:37:11.480996] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1209440 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:39.706 09:37:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:39.706 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.249 09:37:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:42.250 09:37:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.250 09:37:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:42.250 09:37:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.250 09:37:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:42.250 "tick_rate": 2400000000, 00:22:42.250 "poll_groups": [ 00:22:42.250 { 00:22:42.250 "name": "nvmf_tgt_poll_group_000", 00:22:42.250 "admin_qpairs": 1, 00:22:42.250 "io_qpairs": 2, 00:22:42.250 "current_admin_qpairs": 1, 00:22:42.250 "current_io_qpairs": 2, 00:22:42.250 "pending_bdev_io": 0, 00:22:42.250 "completed_nvme_io": 27690, 00:22:42.250 "transports": [ 00:22:42.250 { 00:22:42.250 "trtype": "TCP" 00:22:42.250 } 00:22:42.250 ] 00:22:42.250 }, 00:22:42.250 { 00:22:42.250 "name": "nvmf_tgt_poll_group_001", 00:22:42.250 "admin_qpairs": 0, 00:22:42.250 "io_qpairs": 2, 00:22:42.250 "current_admin_qpairs": 0, 00:22:42.250 "current_io_qpairs": 2, 00:22:42.250 "pending_bdev_io": 0, 00:22:42.250 "completed_nvme_io": 41538, 00:22:42.250 "transports": [ 00:22:42.250 { 00:22:42.250 "trtype": "TCP" 00:22:42.250 } 00:22:42.250 ] 00:22:42.250 }, 00:22:42.250 { 00:22:42.250 "name": "nvmf_tgt_poll_group_002", 00:22:42.250 "admin_qpairs": 0, 00:22:42.250 "io_qpairs": 0, 00:22:42.250 "current_admin_qpairs": 0, 00:22:42.250 "current_io_qpairs": 0, 00:22:42.250 "pending_bdev_io": 0, 00:22:42.250 "completed_nvme_io": 0, 
00:22:42.250 "transports": [ 00:22:42.250 { 00:22:42.250 "trtype": "TCP" 00:22:42.250 } 00:22:42.250 ] 00:22:42.250 }, 00:22:42.250 { 00:22:42.250 "name": "nvmf_tgt_poll_group_003", 00:22:42.250 "admin_qpairs": 0, 00:22:42.250 "io_qpairs": 0, 00:22:42.250 "current_admin_qpairs": 0, 00:22:42.250 "current_io_qpairs": 0, 00:22:42.250 "pending_bdev_io": 0, 00:22:42.250 "completed_nvme_io": 0, 00:22:42.250 "transports": [ 00:22:42.250 { 00:22:42.250 "trtype": "TCP" 00:22:42.250 } 00:22:42.250 ] 00:22:42.250 } 00:22:42.250 ] 00:22:42.250 }' 00:22:42.250 09:37:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:42.250 09:37:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:42.250 09:37:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:42.250 09:37:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:42.250 09:37:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1209440 00:22:50.395 Initializing NVMe Controllers 00:22:50.395 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:50.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:50.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:50.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:50.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:50.395 Initialization complete. Launching workers. 00:22:50.395 ======================================================== 00:22:50.395 Latency(us) 00:22:50.395 Device Information : IOPS MiB/s Average min max 00:22:50.395 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11010.95 43.01 5830.58 1346.78 49419.42 00:22:50.395 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11192.15 43.72 5718.54 1080.87 50797.54 00:22:50.395 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7900.47 30.86 8127.96 1704.03 53705.16 00:22:50.395 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6838.77 26.71 9361.48 1955.36 53088.94 00:22:50.395 ======================================================== 00:22:50.395 Total : 36942.34 144.31 6941.59 1080.87 53705.16 00:22:50.395 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:50.395 rmmod nvme_tcp 00:22:50.395 rmmod nvme_fabrics 00:22:50.395 rmmod nvme_keyring 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1209329 ']' 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1209329 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 1209329 ']' 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 1209329 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1209329 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1209329' 00:22:50.395 killing process with pid 1209329 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 1209329 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 1209329 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:50.395 09:37:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.310 09:37:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:52.310 09:37:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:52.310 00:22:52.310 real 0m52.244s 00:22:52.310 user 2m49.182s 00:22:52.310 sys 0m11.284s 00:22:52.310 09:37:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:52.310 09:37:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.310 ************************************ 00:22:52.310 END TEST nvmf_perf_adq 00:22:52.310 ************************************ 00:22:52.310 09:37:24 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:52.310 09:37:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:52.310 09:37:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:52.310 09:37:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:52.310 ************************************ 00:22:52.310 START TEST nvmf_shutdown 00:22:52.310 ************************************ 00:22:52.310 09:37:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:52.571 * Looking for test storage... 
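The pass/fail gate for ADQ sits just above the perf output: with four I/O qpairs steered onto two of the four poll groups, nvmf_get_stats must report exactly two groups with current_io_qpairs == 0, which is what the jq pipeline counts before `[[ 2 -lt 2 ]]` lets the run proceed. A sketch of that check, assuming rpc_cmd is the harness wrapper around scripts/rpc.py and that the script bails out when the count comes up short (the failure branch never fires in this trace, so its exact form is an assumption):

    # Count poll groups that received no I/O qpairs; ADQ should leave half idle.
    count=$(rpc_cmd nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
    if [[ $count -lt 2 ]]; then
        echo "ADQ steering failed: only $count idle poll groups" >&2
        return 1
    fi

Once spdk_nvme_perf completes, nvmftestfini unloads the host-side nvme-tcp, nvme-fabrics, and nvme-keyring modules, kills target pid 1209329, and tears down the cvl_0_0_ns_spdk namespace so the shutdown suite below can rebuild the topology from scratch.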
00:22:52.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:52.572 ************************************ 00:22:52.572 START TEST nvmf_shutdown_tc1 00:22:52.572 ************************************ 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:22:52.572 09:37:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:52.572 09:37:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:00.720 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:00.720 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.720 09:37:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:00.720 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:00.720 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:00.720 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:00.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:23:00.721 00:23:00.721 --- 10.0.0.2 ping statistics --- 00:23:00.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.721 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:00.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:23:00.721 00:23:00.721 --- 10.0.0.1 ping statistics --- 00:23:00.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.721 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1215873 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1215873 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 1215873 ']' 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.721 [2024-06-11 09:37:31.607148] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:23:00.721 [2024-06-11 09:37:31.607200] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.721 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.721 [2024-06-11 09:37:31.675042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:00.721 [2024-06-11 09:37:31.743071] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.721 [2024-06-11 09:37:31.743106] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.721 [2024-06-11 09:37:31.743114] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.721 [2024-06-11 09:37:31.743120] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.721 [2024-06-11 09:37:31.743126] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.721 [2024-06-11 09:37:31.743233] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.721 [2024-06-11 09:37:31.743392] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.721 [2024-06-11 09:37:31.743550] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.721 [2024-06-11 09:37:31.743551] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.721 [2024-06-11 09:37:31.894157] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:00.721 09:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.721 Malloc1 00:23:00.721 [2024-06-11 09:37:31.997662] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.721 Malloc2 00:23:00.721 Malloc3 00:23:00.721 Malloc4 00:23:00.721 Malloc5 00:23:00.721 Malloc6 00:23:00.721 Malloc7 00:23:00.721 Malloc8 00:23:00.721 Malloc9 00:23:00.721 Malloc10 00:23:00.721 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:00.721 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:00.721 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:00.721 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.721 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1215928 00:23:00.721 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1215928 /var/tmp/bdevperf.sock 00:23:00.721 09:37:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 1215928 ']' 00:23:00.721 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.721 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:00.721 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:00.721 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.722 { 00:23:00.722 "params": { 00:23:00.722 "name": "Nvme$subsystem", 00:23:00.722 "trtype": "$TEST_TRANSPORT", 00:23:00.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.722 "adrfam": "ipv4", 00:23:00.722 "trsvcid": "$NVMF_PORT", 00:23:00.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.722 "hdgst": ${hdgst:-false}, 00:23:00.722 "ddgst": ${ddgst:-false} 00:23:00.722 }, 00:23:00.722 "method": "bdev_nvme_attach_controller" 00:23:00.722 } 00:23:00.722 EOF 00:23:00.722 )") 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.722 { 00:23:00.722 "params": { 00:23:00.722 "name": "Nvme$subsystem", 00:23:00.722 "trtype": "$TEST_TRANSPORT", 00:23:00.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.722 "adrfam": "ipv4", 00:23:00.722 "trsvcid": "$NVMF_PORT", 00:23:00.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.722 "hdgst": ${hdgst:-false}, 00:23:00.722 "ddgst": ${ddgst:-false} 00:23:00.722 }, 00:23:00.722 "method": "bdev_nvme_attach_controller" 00:23:00.722 } 00:23:00.722 EOF 00:23:00.722 )") 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.722 { 00:23:00.722 "params": { 00:23:00.722 "name": "Nvme$subsystem", 00:23:00.722 "trtype": 
"$TEST_TRANSPORT", 00:23:00.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.722 "adrfam": "ipv4", 00:23:00.722 "trsvcid": "$NVMF_PORT", 00:23:00.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.722 "hdgst": ${hdgst:-false}, 00:23:00.722 "ddgst": ${ddgst:-false} 00:23:00.722 }, 00:23:00.722 "method": "bdev_nvme_attach_controller" 00:23:00.722 } 00:23:00.722 EOF 00:23:00.722 )") 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.722 { 00:23:00.722 "params": { 00:23:00.722 "name": "Nvme$subsystem", 00:23:00.722 "trtype": "$TEST_TRANSPORT", 00:23:00.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.722 "adrfam": "ipv4", 00:23:00.722 "trsvcid": "$NVMF_PORT", 00:23:00.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.722 "hdgst": ${hdgst:-false}, 00:23:00.722 "ddgst": ${ddgst:-false} 00:23:00.722 }, 00:23:00.722 "method": "bdev_nvme_attach_controller" 00:23:00.722 } 00:23:00.722 EOF 00:23:00.722 )") 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.722 { 00:23:00.722 "params": { 00:23:00.722 "name": "Nvme$subsystem", 00:23:00.722 "trtype": "$TEST_TRANSPORT", 00:23:00.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.722 "adrfam": "ipv4", 00:23:00.722 "trsvcid": "$NVMF_PORT", 00:23:00.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.722 "hdgst": ${hdgst:-false}, 00:23:00.722 "ddgst": ${ddgst:-false} 00:23:00.722 }, 00:23:00.722 "method": "bdev_nvme_attach_controller" 00:23:00.722 } 00:23:00.722 EOF 00:23:00.722 )") 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.722 { 00:23:00.722 "params": { 00:23:00.722 "name": "Nvme$subsystem", 00:23:00.722 "trtype": "$TEST_TRANSPORT", 00:23:00.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.722 "adrfam": "ipv4", 00:23:00.722 "trsvcid": "$NVMF_PORT", 00:23:00.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.722 "hdgst": ${hdgst:-false}, 00:23:00.722 "ddgst": ${ddgst:-false} 00:23:00.722 }, 00:23:00.722 "method": "bdev_nvme_attach_controller" 00:23:00.722 } 00:23:00.722 EOF 00:23:00.722 )") 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.722 [2024-06-11 09:37:32.448501] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:23:00.722 [2024-06-11 09:37:32.448550] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.722 { 00:23:00.722 "params": { 00:23:00.722 "name": "Nvme$subsystem", 00:23:00.722 "trtype": "$TEST_TRANSPORT", 00:23:00.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.722 "adrfam": "ipv4", 00:23:00.722 "trsvcid": "$NVMF_PORT", 00:23:00.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.722 "hdgst": ${hdgst:-false}, 00:23:00.722 "ddgst": ${ddgst:-false} 00:23:00.722 }, 00:23:00.722 "method": "bdev_nvme_attach_controller" 00:23:00.722 } 00:23:00.722 EOF 00:23:00.722 )") 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.722 { 00:23:00.722 "params": { 00:23:00.722 "name": "Nvme$subsystem", 00:23:00.722 "trtype": "$TEST_TRANSPORT", 00:23:00.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.722 "adrfam": "ipv4", 00:23:00.722 "trsvcid": "$NVMF_PORT", 00:23:00.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.722 "hdgst": ${hdgst:-false}, 00:23:00.722 "ddgst": ${ddgst:-false} 00:23:00.722 }, 00:23:00.722 "method": "bdev_nvme_attach_controller" 00:23:00.722 } 00:23:00.722 EOF 00:23:00.722 )") 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.722 { 00:23:00.722 "params": { 00:23:00.722 "name": "Nvme$subsystem", 00:23:00.722 "trtype": "$TEST_TRANSPORT", 00:23:00.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.722 "adrfam": "ipv4", 00:23:00.722 "trsvcid": "$NVMF_PORT", 00:23:00.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.722 "hdgst": ${hdgst:-false}, 00:23:00.722 "ddgst": ${ddgst:-false} 00:23:00.722 }, 00:23:00.722 "method": "bdev_nvme_attach_controller" 00:23:00.722 } 00:23:00.722 EOF 00:23:00.722 )") 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.722 { 00:23:00.722 "params": { 00:23:00.722 "name": "Nvme$subsystem", 00:23:00.722 "trtype": "$TEST_TRANSPORT", 00:23:00.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.722 "adrfam": "ipv4", 00:23:00.722 "trsvcid": "$NVMF_PORT", 00:23:00.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.722 "hdgst": ${hdgst:-false}, 00:23:00.722 "ddgst": 
${ddgst:-false} 00:23:00.722 }, 00:23:00.722 "method": "bdev_nvme_attach_controller" 00:23:00.722 } 00:23:00.722 EOF 00:23:00.722 )") 00:23:00.722 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:00.722 09:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:00.722 "params": { 00:23:00.722 "name": "Nvme1", 00:23:00.723 "trtype": "tcp", 00:23:00.723 "traddr": "10.0.0.2", 00:23:00.723 "adrfam": "ipv4", 00:23:00.723 "trsvcid": "4420", 00:23:00.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.723 "hdgst": false, 00:23:00.723 "ddgst": false 00:23:00.723 }, 00:23:00.723 "method": "bdev_nvme_attach_controller" 00:23:00.723 },{ 00:23:00.723 "params": { 00:23:00.723 "name": "Nvme2", 00:23:00.723 "trtype": "tcp", 00:23:00.723 "traddr": "10.0.0.2", 00:23:00.723 "adrfam": "ipv4", 00:23:00.723 "trsvcid": "4420", 00:23:00.723 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:00.723 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:00.723 "hdgst": false, 00:23:00.723 "ddgst": false 00:23:00.723 }, 00:23:00.723 "method": "bdev_nvme_attach_controller" 00:23:00.723 },{ 00:23:00.723 "params": { 00:23:00.723 "name": "Nvme3", 00:23:00.723 "trtype": "tcp", 00:23:00.723 "traddr": "10.0.0.2", 00:23:00.723 "adrfam": "ipv4", 00:23:00.723 "trsvcid": "4420", 00:23:00.723 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:00.723 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:00.723 "hdgst": false, 00:23:00.723 "ddgst": false 00:23:00.723 }, 00:23:00.723 "method": "bdev_nvme_attach_controller" 00:23:00.723 },{ 00:23:00.723 "params": { 00:23:00.723 "name": "Nvme4", 00:23:00.723 "trtype": "tcp", 00:23:00.723 "traddr": "10.0.0.2", 00:23:00.723 "adrfam": "ipv4", 00:23:00.723 "trsvcid": "4420", 00:23:00.723 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:00.723 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:00.723 "hdgst": false, 00:23:00.723 "ddgst": false 00:23:00.723 }, 00:23:00.723 "method": "bdev_nvme_attach_controller" 00:23:00.723 },{ 00:23:00.723 "params": { 00:23:00.723 "name": "Nvme5", 00:23:00.723 "trtype": "tcp", 00:23:00.723 "traddr": "10.0.0.2", 00:23:00.723 "adrfam": "ipv4", 00:23:00.723 "trsvcid": "4420", 00:23:00.723 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:00.723 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:00.723 "hdgst": false, 00:23:00.723 "ddgst": false 00:23:00.723 }, 00:23:00.723 "method": "bdev_nvme_attach_controller" 00:23:00.723 },{ 00:23:00.723 "params": { 00:23:00.723 "name": "Nvme6", 00:23:00.723 "trtype": "tcp", 00:23:00.723 "traddr": "10.0.0.2", 00:23:00.723 "adrfam": "ipv4", 00:23:00.723 "trsvcid": "4420", 00:23:00.723 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:00.723 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:00.723 "hdgst": false, 00:23:00.723 "ddgst": false 00:23:00.723 }, 00:23:00.723 "method": "bdev_nvme_attach_controller" 00:23:00.723 },{ 00:23:00.723 "params": { 00:23:00.723 "name": "Nvme7", 00:23:00.723 "trtype": "tcp", 00:23:00.723 "traddr": "10.0.0.2", 00:23:00.723 "adrfam": "ipv4", 00:23:00.723 "trsvcid": "4420", 00:23:00.723 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:00.723 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:00.723 "hdgst": false, 00:23:00.723 "ddgst": false 00:23:00.723 }, 
00:23:00.723 "method": "bdev_nvme_attach_controller" 00:23:00.723 },{ 00:23:00.723 "params": { 00:23:00.723 "name": "Nvme8", 00:23:00.723 "trtype": "tcp", 00:23:00.723 "traddr": "10.0.0.2", 00:23:00.723 "adrfam": "ipv4", 00:23:00.723 "trsvcid": "4420", 00:23:00.723 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:00.723 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:00.723 "hdgst": false, 00:23:00.723 "ddgst": false 00:23:00.723 }, 00:23:00.723 "method": "bdev_nvme_attach_controller" 00:23:00.723 },{ 00:23:00.723 "params": { 00:23:00.723 "name": "Nvme9", 00:23:00.723 "trtype": "tcp", 00:23:00.723 "traddr": "10.0.0.2", 00:23:00.723 "adrfam": "ipv4", 00:23:00.723 "trsvcid": "4420", 00:23:00.723 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:00.723 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:00.723 "hdgst": false, 00:23:00.723 "ddgst": false 00:23:00.723 }, 00:23:00.723 "method": "bdev_nvme_attach_controller" 00:23:00.723 },{ 00:23:00.723 "params": { 00:23:00.723 "name": "Nvme10", 00:23:00.723 "trtype": "tcp", 00:23:00.723 "traddr": "10.0.0.2", 00:23:00.723 "adrfam": "ipv4", 00:23:00.723 "trsvcid": "4420", 00:23:00.723 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:00.723 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:00.723 "hdgst": false, 00:23:00.723 "ddgst": false 00:23:00.723 }, 00:23:00.723 "method": "bdev_nvme_attach_controller" 00:23:00.723 }' 00:23:00.723 [2024-06-11 09:37:32.527645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.984 [2024-06-11 09:37:32.592701] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.407 09:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:02.407 09:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:23:02.408 09:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:02.408 09:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.408 09:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:02.408 09:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.408 09:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1215928 00:23:02.408 09:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:02.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1215928 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:02.408 09:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:03.351 09:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1215873 00:23:03.351 09:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:03.351 09:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:03.351 09:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:03.351 09:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:03.351 09:37:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.351 09:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.351 { 00:23:03.351 "params": { 00:23:03.351 "name": "Nvme$subsystem", 00:23:03.351 "trtype": "$TEST_TRANSPORT", 00:23:03.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.351 "adrfam": "ipv4", 00:23:03.351 "trsvcid": "$NVMF_PORT", 00:23:03.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.351 "hdgst": ${hdgst:-false}, 00:23:03.351 "ddgst": ${ddgst:-false} 00:23:03.351 }, 00:23:03.351 "method": "bdev_nvme_attach_controller" 00:23:03.351 } 00:23:03.351 EOF 00:23:03.351 )") 00:23:03.351 09:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.351 09:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.351 09:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.351 { 00:23:03.351 "params": { 00:23:03.351 "name": "Nvme$subsystem", 00:23:03.351 "trtype": "$TEST_TRANSPORT", 00:23:03.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.351 "adrfam": "ipv4", 00:23:03.351 "trsvcid": "$NVMF_PORT", 00:23:03.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.351 "hdgst": ${hdgst:-false}, 00:23:03.351 "ddgst": ${ddgst:-false} 00:23:03.351 }, 00:23:03.351 "method": "bdev_nvme_attach_controller" 00:23:03.351 } 00:23:03.351 EOF 00:23:03.351 )") 00:23:03.351 09:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.351 09:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.351 09:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.351 { 00:23:03.351 "params": { 00:23:03.351 "name": "Nvme$subsystem", 00:23:03.351 "trtype": "$TEST_TRANSPORT", 00:23:03.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.351 "adrfam": "ipv4", 00:23:03.351 "trsvcid": "$NVMF_PORT", 00:23:03.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.351 "hdgst": ${hdgst:-false}, 00:23:03.351 "ddgst": ${ddgst:-false} 00:23:03.351 }, 00:23:03.351 "method": "bdev_nvme_attach_controller" 00:23:03.351 } 00:23:03.351 EOF 00:23:03.351 )") 00:23:03.351 09:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.351 09:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.351 09:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.351 { 00:23:03.351 "params": { 00:23:03.351 "name": "Nvme$subsystem", 00:23:03.351 "trtype": "$TEST_TRANSPORT", 00:23:03.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.351 "adrfam": "ipv4", 00:23:03.351 "trsvcid": "$NVMF_PORT", 00:23:03.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.351 "hdgst": ${hdgst:-false}, 00:23:03.351 "ddgst": ${ddgst:-false} 00:23:03.351 }, 00:23:03.351 "method": "bdev_nvme_attach_controller" 00:23:03.351 } 00:23:03.351 EOF 00:23:03.351 )") 00:23:03.351 09:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.351 09:37:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.351 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.351 { 00:23:03.351 "params": { 00:23:03.351 "name": "Nvme$subsystem", 00:23:03.351 "trtype": "$TEST_TRANSPORT", 00:23:03.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.351 "adrfam": "ipv4", 00:23:03.351 "trsvcid": "$NVMF_PORT", 00:23:03.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.351 "hdgst": ${hdgst:-false}, 00:23:03.351 "ddgst": ${ddgst:-false} 00:23:03.351 }, 00:23:03.351 "method": "bdev_nvme_attach_controller" 00:23:03.351 } 00:23:03.351 EOF 00:23:03.351 )") 00:23:03.351 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.351 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.351 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.351 { 00:23:03.351 "params": { 00:23:03.351 "name": "Nvme$subsystem", 00:23:03.351 "trtype": "$TEST_TRANSPORT", 00:23:03.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.351 "adrfam": "ipv4", 00:23:03.351 "trsvcid": "$NVMF_PORT", 00:23:03.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.351 "hdgst": ${hdgst:-false}, 00:23:03.351 "ddgst": ${ddgst:-false} 00:23:03.351 }, 00:23:03.351 "method": "bdev_nvme_attach_controller" 00:23:03.351 } 00:23:03.351 EOF 00:23:03.351 )") 00:23:03.351 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.351 [2024-06-11 09:37:35.016343] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:23:03.351 [2024-06-11 09:37:35.016392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216613 ] 00:23:03.351 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.351 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.351 { 00:23:03.351 "params": { 00:23:03.351 "name": "Nvme$subsystem", 00:23:03.351 "trtype": "$TEST_TRANSPORT", 00:23:03.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.351 "adrfam": "ipv4", 00:23:03.351 "trsvcid": "$NVMF_PORT", 00:23:03.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.351 "hdgst": ${hdgst:-false}, 00:23:03.351 "ddgst": ${ddgst:-false} 00:23:03.351 }, 00:23:03.351 "method": "bdev_nvme_attach_controller" 00:23:03.351 } 00:23:03.351 EOF 00:23:03.351 )") 00:23:03.351 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.351 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.351 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.351 { 00:23:03.351 "params": { 00:23:03.351 "name": "Nvme$subsystem", 00:23:03.351 "trtype": "$TEST_TRANSPORT", 00:23:03.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.351 "adrfam": "ipv4", 00:23:03.351 "trsvcid": "$NVMF_PORT", 00:23:03.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.351 "hdgst": ${hdgst:-false}, 00:23:03.351 "ddgst": ${ddgst:-false} 00:23:03.351 }, 00:23:03.351 "method": "bdev_nvme_attach_controller" 00:23:03.351 } 00:23:03.351 EOF 00:23:03.351 )") 00:23:03.351 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.351 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.352 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.352 { 00:23:03.352 "params": { 00:23:03.352 "name": "Nvme$subsystem", 00:23:03.352 "trtype": "$TEST_TRANSPORT", 00:23:03.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.352 "adrfam": "ipv4", 00:23:03.352 "trsvcid": "$NVMF_PORT", 00:23:03.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.352 "hdgst": ${hdgst:-false}, 00:23:03.352 "ddgst": ${ddgst:-false} 00:23:03.352 }, 00:23:03.352 "method": "bdev_nvme_attach_controller" 00:23:03.352 } 00:23:03.352 EOF 00:23:03.352 )") 00:23:03.352 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.352 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.352 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.352 { 00:23:03.352 "params": { 00:23:03.352 "name": "Nvme$subsystem", 00:23:03.352 "trtype": "$TEST_TRANSPORT", 00:23:03.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.352 "adrfam": "ipv4", 00:23:03.352 "trsvcid": "$NVMF_PORT", 00:23:03.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.352 "hdgst": ${hdgst:-false}, 
00:23:03.352 "ddgst": ${ddgst:-false} 00:23:03.352 }, 00:23:03.352 "method": "bdev_nvme_attach_controller" 00:23:03.352 } 00:23:03.352 EOF 00:23:03.352 )") 00:23:03.352 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.352 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.352 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:03.352 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:03.352 09:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:03.352 "params": { 00:23:03.352 "name": "Nvme1", 00:23:03.352 "trtype": "tcp", 00:23:03.352 "traddr": "10.0.0.2", 00:23:03.352 "adrfam": "ipv4", 00:23:03.352 "trsvcid": "4420", 00:23:03.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.352 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:03.352 "hdgst": false, 00:23:03.352 "ddgst": false 00:23:03.352 }, 00:23:03.352 "method": "bdev_nvme_attach_controller" 00:23:03.352 },{ 00:23:03.352 "params": { 00:23:03.352 "name": "Nvme2", 00:23:03.352 "trtype": "tcp", 00:23:03.352 "traddr": "10.0.0.2", 00:23:03.352 "adrfam": "ipv4", 00:23:03.352 "trsvcid": "4420", 00:23:03.352 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:03.352 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:03.352 "hdgst": false, 00:23:03.352 "ddgst": false 00:23:03.352 }, 00:23:03.352 "method": "bdev_nvme_attach_controller" 00:23:03.352 },{ 00:23:03.352 "params": { 00:23:03.352 "name": "Nvme3", 00:23:03.352 "trtype": "tcp", 00:23:03.352 "traddr": "10.0.0.2", 00:23:03.352 "adrfam": "ipv4", 00:23:03.352 "trsvcid": "4420", 00:23:03.352 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:03.352 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:03.352 "hdgst": false, 00:23:03.352 "ddgst": false 00:23:03.352 }, 00:23:03.352 "method": "bdev_nvme_attach_controller" 00:23:03.352 },{ 00:23:03.352 "params": { 00:23:03.352 "name": "Nvme4", 00:23:03.352 "trtype": "tcp", 00:23:03.352 "traddr": "10.0.0.2", 00:23:03.352 "adrfam": "ipv4", 00:23:03.352 "trsvcid": "4420", 00:23:03.352 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:03.352 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:03.352 "hdgst": false, 00:23:03.352 "ddgst": false 00:23:03.352 }, 00:23:03.352 "method": "bdev_nvme_attach_controller" 00:23:03.352 },{ 00:23:03.352 "params": { 00:23:03.352 "name": "Nvme5", 00:23:03.352 "trtype": "tcp", 00:23:03.352 "traddr": "10.0.0.2", 00:23:03.352 "adrfam": "ipv4", 00:23:03.352 "trsvcid": "4420", 00:23:03.352 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:03.352 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:03.352 "hdgst": false, 00:23:03.352 "ddgst": false 00:23:03.352 }, 00:23:03.352 "method": "bdev_nvme_attach_controller" 00:23:03.352 },{ 00:23:03.352 "params": { 00:23:03.352 "name": "Nvme6", 00:23:03.352 "trtype": "tcp", 00:23:03.352 "traddr": "10.0.0.2", 00:23:03.352 "adrfam": "ipv4", 00:23:03.352 "trsvcid": "4420", 00:23:03.352 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:03.352 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:03.352 "hdgst": false, 00:23:03.352 "ddgst": false 00:23:03.352 }, 00:23:03.352 "method": "bdev_nvme_attach_controller" 00:23:03.352 },{ 00:23:03.352 "params": { 00:23:03.352 "name": "Nvme7", 00:23:03.352 "trtype": "tcp", 00:23:03.352 "traddr": "10.0.0.2", 00:23:03.352 "adrfam": "ipv4", 00:23:03.352 "trsvcid": "4420", 00:23:03.352 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:03.352 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:03.352 "hdgst": false, 00:23:03.352 "ddgst": false 
00:23:03.352 },
00:23:03.352 "method": "bdev_nvme_attach_controller"
00:23:03.352 },{
00:23:03.352 "params": {
00:23:03.352 "name": "Nvme8",
00:23:03.352 "trtype": "tcp",
00:23:03.352 "traddr": "10.0.0.2",
00:23:03.352 "adrfam": "ipv4",
00:23:03.352 "trsvcid": "4420",
00:23:03.352 "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:23:03.352 "hostnqn": "nqn.2016-06.io.spdk:host8",
00:23:03.352 "hdgst": false,
00:23:03.352 "ddgst": false
00:23:03.352 },
00:23:03.352 "method": "bdev_nvme_attach_controller"
00:23:03.352 },{
00:23:03.352 "params": {
00:23:03.352 "name": "Nvme9",
00:23:03.352 "trtype": "tcp",
00:23:03.352 "traddr": "10.0.0.2",
00:23:03.352 "adrfam": "ipv4",
00:23:03.352 "trsvcid": "4420",
00:23:03.352 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:23:03.352 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:23:03.352 "hdgst": false,
00:23:03.352 "ddgst": false
00:23:03.352 },
00:23:03.352 "method": "bdev_nvme_attach_controller"
00:23:03.352 },{
00:23:03.352 "params": {
00:23:03.352 "name": "Nvme10",
00:23:03.352 "trtype": "tcp",
00:23:03.352 "traddr": "10.0.0.2",
00:23:03.352 "adrfam": "ipv4",
00:23:03.352 "trsvcid": "4420",
00:23:03.352 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:23:03.352 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:23:03.352 "hdgst": false,
00:23:03.352 "ddgst": false
00:23:03.352 },
00:23:03.352 "method": "bdev_nvme_attach_controller"
00:23:03.352 }'
00:23:03.352 [2024-06-11 09:37:35.091482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:03.352 [2024-06-11 09:37:35.155541] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:23:05.266 Running I/O for 1 seconds...
00:23:06.207
00:23:06.207 Latency(us)
00:23:06.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:06.207 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:06.207 Verification LBA range: start 0x0 length 0x400
00:23:06.207 Nvme1n1 : 1.12 229.49 14.34 0.00 0.00 275733.33 22719.15 255153.49
00:23:06.207 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:06.207 Verification LBA range: start 0x0 length 0x400
00:23:06.207 Nvme2n1 : 1.12 227.80 14.24 0.00 0.00 272963.20 20862.29 248162.99
00:23:06.207 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:06.207 Verification LBA range: start 0x0 length 0x400
00:23:06.207 Nvme3n1 : 1.11 230.12 14.38 0.00 0.00 265065.39 22500.69 242920.11
00:23:06.207 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:06.207 Verification LBA range: start 0x0 length 0x400
00:23:06.207 Nvme4n1 : 1.12 228.34 14.27 0.00 0.00 262812.59 17257.81 249910.61
00:23:06.207 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:06.207 Verification LBA range: start 0x0 length 0x400
00:23:06.207 Nvme5n1 : 1.15 222.21 13.89 0.00 0.00 265320.32 20425.39 255153.49
00:23:06.207 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:06.207 Verification LBA range: start 0x0 length 0x400
00:23:06.207 Nvme6n1 : 1.17 219.54 13.72 0.00 0.00 263404.80 21736.11 274377.39
00:23:06.207 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:06.207 Verification LBA range: start 0x0 length 0x400
00:23:06.207 Nvme7n1 : 1.16 276.26 17.27 0.00 0.00 205865.13 16493.23 255153.49
00:23:06.207 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:06.207 Verification LBA range: start 0x0 length 0x400
00:23:06.207 Nvme8n1 : 1.19 269.84 16.86 0.00 0.00 207248.38 11359.57 249910.61
00:23:06.207 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:06.207 Verification LBA range: start 0x0 length 0x400
00:23:06.207 Nvme9n1 : 1.16 275.50 17.22 0.00 0.00 198862.68 17803.95 248162.99
00:23:06.207 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:06.207 Verification LBA range: start 0x0 length 0x400
00:23:06.207 Nvme10n1 : 1.19 225.77 14.11 0.00 0.00 237590.52 4014.08 274377.39
00:23:06.207 ===================================================================================================================
00:23:06.207 Total : 2404.87 150.30 0.00 0.00 242569.95 4014.08 274377.39
00:23:06.207 09:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:23:06.207 09:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:23:06.207 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:06.207 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:06.207 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:23:06.207 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:06.207 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:23:06.207 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:06.207 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e
00:23:06.207 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:06.207 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:06.207 rmmod nvme_tcp
00:23:06.466 rmmod nvme_fabrics
00:23:06.466 rmmod nvme_keyring
00:23:06.466 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:06.466 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e
00:23:06.466 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0
00:23:06.466 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1215873 ']'
00:23:06.466 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1215873
00:23:06.466 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 1215873 ']'
00:23:06.466 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 1215873
00:23:06.466 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname
00:23:06.466 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:23:06.466 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1215873
00:23:06.466 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:23:06.466 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:23:06.466 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 --
common/autotest_common.sh@967 -- # echo 'killing process with pid 1215873'
00:23:06.466 killing process with pid 1215873
00:23:06.466 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # kill 1215873
00:23:06.726 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 1215873
00:23:06.726 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:06.726 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:06.726 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:06.726 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:06.726 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:06.726 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:06.726 09:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval
'_remove_spdk_ns 14> /dev/null' 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.271 09:37:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:09.271 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:09.272 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:09.272 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:09.272 09:37:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:09.272 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:09.272 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:23:09.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:09.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms
00:23:09.272
00:23:09.272 --- 10.0.0.2 ping statistics ---
00:23:09.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:09.272 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:09.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:09.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.382 ms
00:23:09.272
00:23:09.272 --- 10.0.0.1 ping statistics ---
00:23:09.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:09.272 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1217747 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1217747 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1217747 ']' 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:09.272 09:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.272 [2024-06-11 09:37:40.977307] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:23:09.272 [2024-06-11 09:37:40.977365] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.272 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.273 [2024-06-11 09:37:41.044204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:09.533 [2024-06-11 09:37:41.109401] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.533 [2024-06-11 09:37:41.109435] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.533 [2024-06-11 09:37:41.109443] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.533 [2024-06-11 09:37:41.109449] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.533 [2024-06-11 09:37:41.109455] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
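The nvmftestinit trace above (nvmf/common.sh@248-268) shows the loopback topology these tests run on: one port of the E810 pair is moved into a private network namespace to act as the target (cvl_0_0, 10.0.0.2) while the other stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1), and nvmf_tgt is then launched inside that namespace. A minimal standalone sketch of the same setup, reusing the interface names and addresses from this run (root privileges assumed, the common.sh plumbing and error handling omitted):

ip netns add cvl_0_0_ns_spdk                          # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                                    # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator reachability
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0x1E &   # app lives in the namespace

The iptables rule matters because moving the port into a namespace does not exempt the initiator-side interface from the host firewall, and 4420 is the standard NVMe-oF port used by every listener in this job.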
00:23:09.533 [2024-06-11 09:37:41.109560] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.533 [2024-06-11 09:37:41.109716] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:09.533 [2024-06-11 09:37:41.109869] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.533 [2024-06-11 09:37:41.109870] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:23:09.533 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.534 [2024-06-11 09:37:41.249147] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.534 09:37:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.534 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.534 Malloc1 00:23:09.794 [2024-06-11 09:37:41.349875] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.794 Malloc2 00:23:09.794 Malloc3 00:23:09.794 Malloc4 00:23:09.794 Malloc5 00:23:09.794 Malloc6 00:23:09.794 Malloc7 00:23:09.794 Malloc8 00:23:10.056 Malloc9 00:23:10.056 Malloc10 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1218009 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1218009 /var/tmp/bdevperf.sock 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1218009 ']' 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
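shutdown.sh@102-104, traced next, launch bdevperf in the background with its controller list delivered through a process substitution (which is why the trace shows --json /dev/fd/63 rather than a file on disk) and then wait on the app's RPC socket before the verify workload starts. A condensed sketch of that launch pattern, with the paths from this run; gen_nvmf_target_json is the common.sh helper whose expansion is traced below, and the rpc.py call stands in for the rpc_cmd/waitforlisten wrappers:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# <(...) shows up inside the app as /dev/fd/NN, so no temp file is written.
"$rootdir/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
# framework_wait_init blocks until the JSON config is loaded and the
# framework is up, as the tc1 trace did over /var/tmp/bdevperf.sock.
"$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock framework_wait_init
wait "$perfpid"

The queue depth (-q 64), I/O size (-o 65536) and verify workload match the tc1 run above; only the runtime differs (-t 10 here versus -t 1 for tc1).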
00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.056 { 00:23:10.056 "params": { 00:23:10.056 "name": "Nvme$subsystem", 00:23:10.056 "trtype": "$TEST_TRANSPORT", 00:23:10.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.056 "adrfam": "ipv4", 00:23:10.056 "trsvcid": "$NVMF_PORT", 00:23:10.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.056 "hdgst": ${hdgst:-false}, 00:23:10.056 "ddgst": ${ddgst:-false} 00:23:10.056 }, 00:23:10.056 "method": "bdev_nvme_attach_controller" 00:23:10.056 } 00:23:10.056 EOF 00:23:10.056 )") 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.056 { 00:23:10.056 "params": { 00:23:10.056 "name": "Nvme$subsystem", 00:23:10.056 "trtype": "$TEST_TRANSPORT", 00:23:10.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.056 "adrfam": "ipv4", 00:23:10.056 "trsvcid": "$NVMF_PORT", 00:23:10.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.056 "hdgst": ${hdgst:-false}, 00:23:10.056 "ddgst": ${ddgst:-false} 00:23:10.056 }, 00:23:10.056 "method": "bdev_nvme_attach_controller" 00:23:10.056 } 00:23:10.056 EOF 00:23:10.056 )") 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.056 { 00:23:10.056 "params": { 00:23:10.056 "name": "Nvme$subsystem", 00:23:10.056 "trtype": "$TEST_TRANSPORT", 00:23:10.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.056 "adrfam": "ipv4", 00:23:10.056 "trsvcid": "$NVMF_PORT", 00:23:10.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.056 "hdgst": ${hdgst:-false}, 00:23:10.056 "ddgst": ${ddgst:-false} 00:23:10.056 }, 00:23:10.056 "method": "bdev_nvme_attach_controller" 00:23:10.056 } 00:23:10.056 EOF 00:23:10.056 )") 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.056 { 00:23:10.056 "params": { 00:23:10.056 "name": "Nvme$subsystem", 00:23:10.056 "trtype": "$TEST_TRANSPORT", 00:23:10.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.056 "adrfam": "ipv4", 00:23:10.056 "trsvcid": "$NVMF_PORT", 00:23:10.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.056 "hdgst": ${hdgst:-false}, 00:23:10.056 "ddgst": ${ddgst:-false} 00:23:10.056 }, 00:23:10.056 "method": "bdev_nvme_attach_controller" 00:23:10.056 } 00:23:10.056 EOF 00:23:10.056 )") 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.056 { 00:23:10.056 "params": { 00:23:10.056 "name": "Nvme$subsystem", 00:23:10.056 "trtype": "$TEST_TRANSPORT", 00:23:10.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.056 "adrfam": "ipv4", 00:23:10.056 "trsvcid": "$NVMF_PORT", 00:23:10.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.056 "hdgst": ${hdgst:-false}, 00:23:10.056 "ddgst": ${ddgst:-false} 00:23:10.056 }, 00:23:10.056 "method": "bdev_nvme_attach_controller" 00:23:10.056 } 00:23:10.056 EOF 00:23:10.056 )") 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.056 { 00:23:10.056 "params": { 00:23:10.056 "name": "Nvme$subsystem", 00:23:10.056 "trtype": "$TEST_TRANSPORT", 00:23:10.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.056 "adrfam": "ipv4", 00:23:10.056 "trsvcid": "$NVMF_PORT", 00:23:10.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.056 "hdgst": ${hdgst:-false}, 00:23:10.056 "ddgst": ${ddgst:-false} 00:23:10.056 }, 00:23:10.056 "method": "bdev_nvme_attach_controller" 00:23:10.056 } 00:23:10.056 EOF 00:23:10.056 )") 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.056 { 00:23:10.056 "params": { 00:23:10.056 "name": "Nvme$subsystem", 00:23:10.056 "trtype": "$TEST_TRANSPORT", 00:23:10.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.056 "adrfam": "ipv4", 00:23:10.056 "trsvcid": "$NVMF_PORT", 00:23:10.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.056 "hdgst": ${hdgst:-false}, 00:23:10.056 "ddgst": ${ddgst:-false} 00:23:10.056 }, 00:23:10.056 "method": "bdev_nvme_attach_controller" 00:23:10.056 } 00:23:10.056 EOF 00:23:10.056 )") 00:23:10.056 [2024-06-11 09:37:41.799375] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:23:10.056 [2024-06-11 09:37:41.799426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1218009 ] 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.056 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.056 { 00:23:10.056 "params": { 00:23:10.056 "name": "Nvme$subsystem", 00:23:10.056 "trtype": "$TEST_TRANSPORT", 00:23:10.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.056 "adrfam": "ipv4", 00:23:10.056 "trsvcid": "$NVMF_PORT", 00:23:10.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.057 "hdgst": ${hdgst:-false}, 00:23:10.057 "ddgst": ${ddgst:-false} 00:23:10.057 }, 00:23:10.057 "method": "bdev_nvme_attach_controller" 00:23:10.057 } 00:23:10.057 EOF 00:23:10.057 )") 00:23:10.057 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.057 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.057 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.057 { 00:23:10.057 "params": { 00:23:10.057 "name": "Nvme$subsystem", 00:23:10.057 "trtype": "$TEST_TRANSPORT", 00:23:10.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.057 "adrfam": "ipv4", 00:23:10.057 "trsvcid": "$NVMF_PORT", 00:23:10.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.057 "hdgst": ${hdgst:-false}, 00:23:10.057 "ddgst": ${ddgst:-false} 00:23:10.057 }, 00:23:10.057 "method": "bdev_nvme_attach_controller" 00:23:10.057 } 00:23:10.057 EOF 00:23:10.057 )") 00:23:10.057 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.057 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.057 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.057 { 00:23:10.057 "params": { 00:23:10.057 "name": "Nvme$subsystem", 00:23:10.057 "trtype": "$TEST_TRANSPORT", 00:23:10.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.057 "adrfam": "ipv4", 00:23:10.057 "trsvcid": "$NVMF_PORT", 00:23:10.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.057 "hdgst": ${hdgst:-false}, 00:23:10.057 "ddgst": ${ddgst:-false} 00:23:10.057 }, 00:23:10.057 "method": "bdev_nvme_attach_controller" 00:23:10.057 } 00:23:10.057 EOF 00:23:10.057 )") 00:23:10.057 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:10.057 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.057 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:23:10.057 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:10.057 09:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:10.057 "params": { 00:23:10.057 "name": "Nvme1", 00:23:10.057 "trtype": "tcp", 00:23:10.057 "traddr": "10.0.0.2", 00:23:10.057 "adrfam": "ipv4", 00:23:10.057 "trsvcid": "4420", 00:23:10.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.057 "hdgst": false, 00:23:10.057 "ddgst": false 00:23:10.057 }, 00:23:10.057 "method": "bdev_nvme_attach_controller" 00:23:10.057 },{ 00:23:10.057 "params": { 00:23:10.057 "name": "Nvme2", 00:23:10.057 "trtype": "tcp", 00:23:10.057 "traddr": "10.0.0.2", 00:23:10.057 "adrfam": "ipv4", 00:23:10.057 "trsvcid": "4420", 00:23:10.057 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:10.057 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:10.057 "hdgst": false, 00:23:10.057 "ddgst": false 00:23:10.057 }, 00:23:10.057 "method": "bdev_nvme_attach_controller" 00:23:10.057 },{ 00:23:10.057 "params": { 00:23:10.057 "name": "Nvme3", 00:23:10.057 "trtype": "tcp", 00:23:10.057 "traddr": "10.0.0.2", 00:23:10.057 "adrfam": "ipv4", 00:23:10.057 "trsvcid": "4420", 00:23:10.057 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:10.057 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:10.057 "hdgst": false, 00:23:10.057 "ddgst": false 00:23:10.057 }, 00:23:10.057 "method": "bdev_nvme_attach_controller" 00:23:10.057 },{ 00:23:10.057 "params": { 00:23:10.057 "name": "Nvme4", 00:23:10.057 "trtype": "tcp", 00:23:10.057 "traddr": "10.0.0.2", 00:23:10.057 "adrfam": "ipv4", 00:23:10.057 "trsvcid": "4420", 00:23:10.057 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:10.057 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:10.057 "hdgst": false, 00:23:10.057 "ddgst": false 00:23:10.057 }, 00:23:10.057 "method": "bdev_nvme_attach_controller" 00:23:10.057 },{ 00:23:10.057 "params": { 00:23:10.057 "name": "Nvme5", 00:23:10.057 "trtype": "tcp", 00:23:10.057 "traddr": "10.0.0.2", 00:23:10.057 "adrfam": "ipv4", 00:23:10.057 "trsvcid": "4420", 00:23:10.057 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:10.057 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:10.057 "hdgst": false, 00:23:10.057 "ddgst": false 00:23:10.057 }, 00:23:10.057 "method": "bdev_nvme_attach_controller" 00:23:10.057 },{ 00:23:10.057 "params": { 00:23:10.057 "name": "Nvme6", 00:23:10.057 "trtype": "tcp", 00:23:10.057 "traddr": "10.0.0.2", 00:23:10.057 "adrfam": "ipv4", 00:23:10.057 "trsvcid": "4420", 00:23:10.057 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:10.057 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:10.057 "hdgst": false, 00:23:10.057 "ddgst": false 00:23:10.057 }, 00:23:10.057 "method": "bdev_nvme_attach_controller" 00:23:10.057 },{ 00:23:10.057 "params": { 00:23:10.057 "name": "Nvme7", 00:23:10.057 "trtype": "tcp", 00:23:10.057 "traddr": "10.0.0.2", 00:23:10.057 "adrfam": "ipv4", 00:23:10.057 "trsvcid": "4420", 00:23:10.057 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:10.057 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:10.057 "hdgst": false, 00:23:10.057 "ddgst": false 00:23:10.057 }, 00:23:10.057 "method": "bdev_nvme_attach_controller" 00:23:10.057 },{ 00:23:10.057 "params": { 00:23:10.057 "name": "Nvme8", 00:23:10.057 "trtype": "tcp", 00:23:10.057 "traddr": "10.0.0.2", 00:23:10.057 "adrfam": "ipv4", 00:23:10.057 "trsvcid": "4420", 00:23:10.057 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:10.057 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:10.057 "hdgst": false, 
00:23:10.057 "ddgst": false 00:23:10.057 }, 00:23:10.057 "method": "bdev_nvme_attach_controller" 00:23:10.057 },{ 00:23:10.057 "params": { 00:23:10.057 "name": "Nvme9", 00:23:10.057 "trtype": "tcp", 00:23:10.057 "traddr": "10.0.0.2", 00:23:10.057 "adrfam": "ipv4", 00:23:10.057 "trsvcid": "4420", 00:23:10.057 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:10.057 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:10.057 "hdgst": false, 00:23:10.057 "ddgst": false 00:23:10.057 }, 00:23:10.057 "method": "bdev_nvme_attach_controller" 00:23:10.057 },{ 00:23:10.057 "params": { 00:23:10.057 "name": "Nvme10", 00:23:10.057 "trtype": "tcp", 00:23:10.057 "traddr": "10.0.0.2", 00:23:10.057 "adrfam": "ipv4", 00:23:10.057 "trsvcid": "4420", 00:23:10.057 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:10.057 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:10.057 "hdgst": false, 00:23:10.057 "ddgst": false 00:23:10.057 }, 00:23:10.057 "method": "bdev_nvme_attach_controller" 00:23:10.057 }' 00:23:10.318 [2024-06-11 09:37:41.875155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.318 [2024-06-11 09:37:41.940702] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.228 Running I/O for 10 seconds... 00:23:12.228 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:12.228 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:23:12.229 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:12.229 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.229 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.229 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.229 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:12.229 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:12.229 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:12.229 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:12.229 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:12.229 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:12.229 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:12.229 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:12.229 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:12.229 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.229 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.229 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.229 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:12.229 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 
3 -ge 100 ']' 00:23:12.229 09:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:12.489 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:12.489 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:12.489 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:12.489 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:12.489 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.489 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.489 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.489 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:12.489 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:12.489 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1218009 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 1218009 ']' 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 1218009 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1218009 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- 
# '[' reactor_0 = sudo ']'
00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1218009'
00:23:12.749 killing process with pid 1218009
00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 1218009
00:23:12.749 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 1218009
00:23:12.749 Received shutdown signal, test time was about 0.960838 seconds
00:23:12.749 00
00:23:12.749 Latency(us)
00:23:12.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:12.749 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.749 Verification LBA range: start 0x0 length 0x400
00:23:12.750 Nvme1n1 : 0.95 203.01 12.69 0.00 0.00 311306.24 30583.47 368749.23
00:23:12.750 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.750 Verification LBA range: start 0x0 length 0x400
00:23:12.750 Nvme2n1 : 0.96 200.58 12.54 0.00 0.00 308981.19 19770.03 332049.07
00:23:12.750 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.750 Verification LBA range: start 0x0 length 0x400
00:23:12.750 Nvme3n1 : 0.92 207.96 13.00 0.00 0.00 290921.24 39103.15 293601.28
00:23:12.750 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.750 Verification LBA range: start 0x0 length 0x400
00:23:12.750 Nvme4n1 : 0.93 206.81 12.93 0.00 0.00 286739.06 23156.05 293601.28
00:23:12.750 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.750 Verification LBA range: start 0x0 length 0x400
00:23:12.750 Nvme5n1 : 0.95 202.01 12.63 0.00 0.00 287311.08 36263.25 274377.39
00:23:12.750 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.750 Verification LBA range: start 0x0 length 0x400
00:23:12.750 Nvme6n1 : 0.95 202.53 12.66 0.00 0.00 280500.34 18131.63 297096.53
00:23:12.750 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.750 Verification LBA range: start 0x0 length 0x400
00:23:12.750 Nvme7n1 : 0.94 204.59 12.79 0.00 0.00 270746.74 27962.03 276125.01
00:23:12.750 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.750 Verification LBA range: start 0x0 length 0x400
00:23:12.750 Nvme8n1 : 0.94 205.27 12.83 0.00 0.00 263750.54 26760.53 321563.31
00:23:12.750 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.750 Verification LBA range: start 0x0 length 0x400
00:23:12.750 Nvme9n1 : 0.96 200.01 12.50 0.00 0.00 265715.77 27852.80 354768.21
00:23:12.750 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:12.750 Verification LBA range: start 0x0 length 0x400
00:23:12.750 Nvme10n1 : 0.93 137.67 8.60 0.00 0.00 372706.99 36918.61 373992.11
00:23:12.750 ===================================================================================================================
00:23:12.750 Total : 1970.44 123.15 0.00 0.00 291149.33 18131.63 373992.11
00:23:13.010 09:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1217747
00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f
./local-job0-0-verify.state 00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:13.952 rmmod nvme_tcp 00:23:13.952 rmmod nvme_fabrics 00:23:13.952 rmmod nvme_keyring 00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1217747 ']' 00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1217747 00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 1217747 ']' 00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 1217747 00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:23:13.952 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:14.213 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1217747 00:23:14.213 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:14.213 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:14.213 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1217747' 00:23:14.213 killing process with pid 1217747 00:23:14.213 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 1217747 00:23:14.213 09:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 1217747 00:23:14.473 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:14.474 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:14.474 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:14.474 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:14.474 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- 
# remove_spdk_ns 00:23:14.474 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.474 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.474 09:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.388 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:16.388 00:23:16.388 real 0m7.600s 00:23:16.388 user 0m22.817s 00:23:16.388 sys 0m1.230s 00:23:16.388 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:16.388 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:16.388 ************************************ 00:23:16.388 END TEST nvmf_shutdown_tc2 00:23:16.388 ************************************ 00:23:16.388 09:37:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:16.388 09:37:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:23:16.388 09:37:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:16.388 09:37:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:16.650 ************************************ 00:23:16.650 START TEST nvmf_shutdown_tc3 00:23:16.650 ************************************ 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 
-- # local -a pci_devs 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.650 09:37:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:16.650 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:16.650 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:16.650 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.650 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:16.651 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:16.651 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:16.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:16.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:23:16.912 00:23:16.912 --- 10.0.0.2 ping statistics --- 00:23:16.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.912 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:16.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:16.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:23:16.912 00:23:16.912 --- 10.0.0.1 ping statistics --- 00:23:16.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.912 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1219376 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1219376 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 1219376 ']' 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:16.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:16.912 09:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:16.912 [2024-06-11 09:37:48.643861] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:23:16.912 [2024-06-11 09:37:48.643916] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.912 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.912 [2024-06-11 09:37:48.712065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:17.173 [2024-06-11 09:37:48.778175] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.173 [2024-06-11 09:37:48.778209] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.173 [2024-06-11 09:37:48.778221] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.173 [2024-06-11 09:37:48.778227] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.173 [2024-06-11 09:37:48.778232] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.173 [2024-06-11 09:37:48.778377] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.173 [2024-06-11 09:37:48.778528] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.173 [2024-06-11 09:37:48.778545] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:23:17.173 [2024-06-11 09:37:48.778553] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.748 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:17.748 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:23:17.748 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:17.748 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:17.748 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.010 [2024-06-11 09:37:49.572258] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:18.010 09:37:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:18.010 09:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.010 Malloc1 00:23:18.010 [2024-06-11 09:37:49.675718] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.010 Malloc2 00:23:18.010 Malloc3 00:23:18.010 Malloc4 00:23:18.010 Malloc5 00:23:18.271 Malloc6 00:23:18.271 Malloc7 00:23:18.271 Malloc8 00:23:18.271 Malloc9 00:23:18.271 Malloc10 00:23:18.271 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:18.271 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:18.271 09:37:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:18.271 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.271 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1219658 00:23:18.271 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1219658 /var/tmp/bdevperf.sock 00:23:18.271 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 1219658 ']' 00:23:18.271 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.271 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:18.271 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.271 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:18.271 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:18.271 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:18.271 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:18.271 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:18.271 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:18.271 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.271 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.271 { 00:23:18.271 "params": { 00:23:18.271 "name": "Nvme$subsystem", 00:23:18.271 "trtype": "$TEST_TRANSPORT", 00:23:18.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.271 "adrfam": "ipv4", 00:23:18.271 "trsvcid": "$NVMF_PORT", 00:23:18.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.271 "hdgst": ${hdgst:-false}, 00:23:18.271 "ddgst": ${ddgst:-false} 00:23:18.271 }, 00:23:18.271 "method": "bdev_nvme_attach_controller" 00:23:18.271 } 00:23:18.271 EOF 00:23:18.271 )") 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.533 { 00:23:18.533 "params": { 00:23:18.533 "name": "Nvme$subsystem", 00:23:18.533 "trtype": "$TEST_TRANSPORT", 00:23:18.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.533 "adrfam": "ipv4", 00:23:18.533 "trsvcid": "$NVMF_PORT", 00:23:18.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.533 "hdgst": ${hdgst:-false}, 00:23:18.533 "ddgst": ${ddgst:-false} 00:23:18.533 }, 00:23:18.533 "method": 
"bdev_nvme_attach_controller" 00:23:18.533 } 00:23:18.533 EOF 00:23:18.533 )") 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.533 { 00:23:18.533 "params": { 00:23:18.533 "name": "Nvme$subsystem", 00:23:18.533 "trtype": "$TEST_TRANSPORT", 00:23:18.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.533 "adrfam": "ipv4", 00:23:18.533 "trsvcid": "$NVMF_PORT", 00:23:18.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.533 "hdgst": ${hdgst:-false}, 00:23:18.533 "ddgst": ${ddgst:-false} 00:23:18.533 }, 00:23:18.533 "method": "bdev_nvme_attach_controller" 00:23:18.533 } 00:23:18.533 EOF 00:23:18.533 )") 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.533 { 00:23:18.533 "params": { 00:23:18.533 "name": "Nvme$subsystem", 00:23:18.533 "trtype": "$TEST_TRANSPORT", 00:23:18.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.533 "adrfam": "ipv4", 00:23:18.533 "trsvcid": "$NVMF_PORT", 00:23:18.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.533 "hdgst": ${hdgst:-false}, 00:23:18.533 "ddgst": ${ddgst:-false} 00:23:18.533 }, 00:23:18.533 "method": "bdev_nvme_attach_controller" 00:23:18.533 } 00:23:18.533 EOF 00:23:18.533 )") 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.533 { 00:23:18.533 "params": { 00:23:18.533 "name": "Nvme$subsystem", 00:23:18.533 "trtype": "$TEST_TRANSPORT", 00:23:18.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.533 "adrfam": "ipv4", 00:23:18.533 "trsvcid": "$NVMF_PORT", 00:23:18.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.533 "hdgst": ${hdgst:-false}, 00:23:18.533 "ddgst": ${ddgst:-false} 00:23:18.533 }, 00:23:18.533 "method": "bdev_nvme_attach_controller" 00:23:18.533 } 00:23:18.533 EOF 00:23:18.533 )") 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.533 { 00:23:18.533 "params": { 00:23:18.533 "name": "Nvme$subsystem", 00:23:18.533 "trtype": "$TEST_TRANSPORT", 00:23:18.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.533 "adrfam": "ipv4", 00:23:18.533 "trsvcid": "$NVMF_PORT", 00:23:18.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.533 "hdgst": ${hdgst:-false}, 00:23:18.533 "ddgst": ${ddgst:-false} 00:23:18.533 }, 00:23:18.533 "method": "bdev_nvme_attach_controller" 
00:23:18.533 } 00:23:18.533 EOF 00:23:18.533 )") 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.533 [2024-06-11 09:37:50.127729] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:23:18.533 [2024-06-11 09:37:50.127781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1219658 ] 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.533 { 00:23:18.533 "params": { 00:23:18.533 "name": "Nvme$subsystem", 00:23:18.533 "trtype": "$TEST_TRANSPORT", 00:23:18.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.533 "adrfam": "ipv4", 00:23:18.533 "trsvcid": "$NVMF_PORT", 00:23:18.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.533 "hdgst": ${hdgst:-false}, 00:23:18.533 "ddgst": ${ddgst:-false} 00:23:18.533 }, 00:23:18.533 "method": "bdev_nvme_attach_controller" 00:23:18.533 } 00:23:18.533 EOF 00:23:18.533 )") 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.533 { 00:23:18.533 "params": { 00:23:18.533 "name": "Nvme$subsystem", 00:23:18.533 "trtype": "$TEST_TRANSPORT", 00:23:18.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.533 "adrfam": "ipv4", 00:23:18.533 "trsvcid": "$NVMF_PORT", 00:23:18.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.533 "hdgst": ${hdgst:-false}, 00:23:18.533 "ddgst": ${ddgst:-false} 00:23:18.533 }, 00:23:18.533 "method": "bdev_nvme_attach_controller" 00:23:18.533 } 00:23:18.533 EOF 00:23:18.533 )") 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.533 { 00:23:18.533 "params": { 00:23:18.533 "name": "Nvme$subsystem", 00:23:18.533 "trtype": "$TEST_TRANSPORT", 00:23:18.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.533 "adrfam": "ipv4", 00:23:18.533 "trsvcid": "$NVMF_PORT", 00:23:18.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.533 "hdgst": ${hdgst:-false}, 00:23:18.533 "ddgst": ${ddgst:-false} 00:23:18.533 }, 00:23:18.533 "method": "bdev_nvme_attach_controller" 00:23:18.533 } 00:23:18.533 EOF 00:23:18.533 )") 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.533 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.534 { 00:23:18.534 "params": { 00:23:18.534 "name": "Nvme$subsystem", 00:23:18.534 "trtype": "$TEST_TRANSPORT", 00:23:18.534 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:23:18.534 "adrfam": "ipv4", 00:23:18.534 "trsvcid": "$NVMF_PORT", 00:23:18.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.534 "hdgst": ${hdgst:-false}, 00:23:18.534 "ddgst": ${ddgst:-false} 00:23:18.534 }, 00:23:18.534 "method": "bdev_nvme_attach_controller" 00:23:18.534 } 00:23:18.534 EOF 00:23:18.534 )") 00:23:18.534 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.534 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:18.534 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:18.534 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:18.534 09:37:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:18.534 "params": { 00:23:18.534 "name": "Nvme1", 00:23:18.534 "trtype": "tcp", 00:23:18.534 "traddr": "10.0.0.2", 00:23:18.534 "adrfam": "ipv4", 00:23:18.534 "trsvcid": "4420", 00:23:18.534 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.534 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.534 "hdgst": false, 00:23:18.534 "ddgst": false 00:23:18.534 }, 00:23:18.534 "method": "bdev_nvme_attach_controller" 00:23:18.534 },{ 00:23:18.534 "params": { 00:23:18.534 "name": "Nvme2", 00:23:18.534 "trtype": "tcp", 00:23:18.534 "traddr": "10.0.0.2", 00:23:18.534 "adrfam": "ipv4", 00:23:18.534 "trsvcid": "4420", 00:23:18.534 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:18.534 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:18.534 "hdgst": false, 00:23:18.534 "ddgst": false 00:23:18.534 }, 00:23:18.534 "method": "bdev_nvme_attach_controller" 00:23:18.534 },{ 00:23:18.534 "params": { 00:23:18.534 "name": "Nvme3", 00:23:18.534 "trtype": "tcp", 00:23:18.534 "traddr": "10.0.0.2", 00:23:18.534 "adrfam": "ipv4", 00:23:18.534 "trsvcid": "4420", 00:23:18.534 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:18.534 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:18.534 "hdgst": false, 00:23:18.534 "ddgst": false 00:23:18.534 }, 00:23:18.534 "method": "bdev_nvme_attach_controller" 00:23:18.534 },{ 00:23:18.534 "params": { 00:23:18.534 "name": "Nvme4", 00:23:18.534 "trtype": "tcp", 00:23:18.534 "traddr": "10.0.0.2", 00:23:18.534 "adrfam": "ipv4", 00:23:18.534 "trsvcid": "4420", 00:23:18.534 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:18.534 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:18.534 "hdgst": false, 00:23:18.534 "ddgst": false 00:23:18.534 }, 00:23:18.534 "method": "bdev_nvme_attach_controller" 00:23:18.534 },{ 00:23:18.534 "params": { 00:23:18.534 "name": "Nvme5", 00:23:18.534 "trtype": "tcp", 00:23:18.534 "traddr": "10.0.0.2", 00:23:18.534 "adrfam": "ipv4", 00:23:18.534 "trsvcid": "4420", 00:23:18.534 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:18.534 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:18.534 "hdgst": false, 00:23:18.534 "ddgst": false 00:23:18.534 }, 00:23:18.534 "method": "bdev_nvme_attach_controller" 00:23:18.534 },{ 00:23:18.534 "params": { 00:23:18.534 "name": "Nvme6", 00:23:18.534 "trtype": "tcp", 00:23:18.534 "traddr": "10.0.0.2", 00:23:18.534 "adrfam": "ipv4", 00:23:18.534 "trsvcid": "4420", 00:23:18.534 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:18.534 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:18.534 "hdgst": false, 00:23:18.534 "ddgst": false 00:23:18.534 }, 00:23:18.534 "method": "bdev_nvme_attach_controller" 00:23:18.534 },{ 00:23:18.534 "params": { 00:23:18.534 "name": "Nvme7", 00:23:18.534 "trtype": "tcp", 
00:23:18.534 "traddr": "10.0.0.2", 00:23:18.534 "adrfam": "ipv4", 00:23:18.534 "trsvcid": "4420", 00:23:18.534 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:18.534 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:18.534 "hdgst": false, 00:23:18.534 "ddgst": false 00:23:18.534 }, 00:23:18.534 "method": "bdev_nvme_attach_controller" 00:23:18.534 },{ 00:23:18.534 "params": { 00:23:18.534 "name": "Nvme8", 00:23:18.534 "trtype": "tcp", 00:23:18.534 "traddr": "10.0.0.2", 00:23:18.534 "adrfam": "ipv4", 00:23:18.534 "trsvcid": "4420", 00:23:18.534 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:18.534 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:18.534 "hdgst": false, 00:23:18.534 "ddgst": false 00:23:18.534 }, 00:23:18.534 "method": "bdev_nvme_attach_controller" 00:23:18.534 },{ 00:23:18.534 "params": { 00:23:18.534 "name": "Nvme9", 00:23:18.534 "trtype": "tcp", 00:23:18.534 "traddr": "10.0.0.2", 00:23:18.534 "adrfam": "ipv4", 00:23:18.534 "trsvcid": "4420", 00:23:18.534 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:18.534 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:18.534 "hdgst": false, 00:23:18.534 "ddgst": false 00:23:18.534 }, 00:23:18.534 "method": "bdev_nvme_attach_controller" 00:23:18.534 },{ 00:23:18.534 "params": { 00:23:18.534 "name": "Nvme10", 00:23:18.534 "trtype": "tcp", 00:23:18.534 "traddr": "10.0.0.2", 00:23:18.534 "adrfam": "ipv4", 00:23:18.534 "trsvcid": "4420", 00:23:18.534 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:18.534 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:18.534 "hdgst": false, 00:23:18.534 "ddgst": false 00:23:18.534 }, 00:23:18.534 "method": "bdev_nvme_attach_controller" 00:23:18.534 }' 00:23:18.534 [2024-06-11 09:37:50.203382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.534 [2024-06-11 09:37:50.267608] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.972 Running I/O for 10 seconds... 
00:23:19.972 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:19.972 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:23:19.972 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:19.972 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:19.972 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:20.233 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.233 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:20.233 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:20.233 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:20.233 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:20.233 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:20.233 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:20.233 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:20.233 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:20.233 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:20.233 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:20.233 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.233 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:20.233 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.233 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:20.233 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:20.233 09:37:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:20.501 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:20.501 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:20.501 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:20.501 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:20.501 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.501 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:20.501 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.501 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:23:20.501 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:20.501 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:20.761 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:20.761 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:20.761 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:20.761 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:20.761 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.762 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:20.762 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.762 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:20.762 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:20.762 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:20.762 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:20.762 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:20.762 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1219376 00:23:20.762 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 1219376 ']' 00:23:20.762 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 1219376 00:23:20.762 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:23:20.762 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:20.762 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1219376 00:23:21.035 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:21.035 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:21.035 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1219376' 00:23:21.035 killing process with pid 1219376 00:23:21.035 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 1219376 00:23:21.035 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 1219376 00:23:21.035 [2024-06-11 09:37:52.590477] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1989020 is same with the state(5) to be set 00:23:21.035 [2024-06-11 09:37:52.591286] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ba00 is same with the state(5) to be set 00:23:21.035 [2024-06-11 09:37:52.591318] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ba00 is same with the state(5) to be set 00:23:21.035 [2024-06-11 09:37:52.591326] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x198ba00 is same with the state(5) to be set 00:23:21.036 [2024-06-11 09:37:52.592961] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19894c0 is same with the state(5) to be set 00:23:21.036 [2024-06-11 09:37:52.593931] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1989960 is same with the state(5) to be set 00:23:21.037 [2024-06-11 09:37:52.595673] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1989e20 is same with the state(5) to be set 00:23:21.038 [2024-06-11 09:37:52.597274] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198a760 is same with the state(5) to be set 00:23:21.038 [2024-06-11 09:37:52.598630] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ac00 is same with the state(5) to be set 00:23:21.039 [2024-06-11 09:37:52.599792] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same
with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599874] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599880] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599886] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599893] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599903] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599909] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599916] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599922] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599928] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599934] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599941] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599947] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599954] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599960] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599966] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599973] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599979] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599985] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599992] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.599998] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600005] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600011] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600017] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600023] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600030] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600036] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600043] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600050] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600056] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600062] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600068] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600076] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600082] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600088] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600096] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600102] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600109] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600115] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600121] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600128] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600135] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600141] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the state(5) to be set 00:23:21.040 [2024-06-11 09:37:52.600148] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b0c0 is same with the 
00:23:21.040 [2024-06-11 09:37:52.603798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.040 [2024-06-11 09:37:52.603834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (same WRITE/ABORTED pair repeated for cid:27 through cid:63, lba 28032 through 32640)
00:23:21.041 [2024-06-11 09:37:52.604466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.041 [2024-06-11 09:37:52.604473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (same READ/ABORTED pair repeated for cid:1 through cid:25, lba 24704 through 27776)
00:23:21.042 [2024-06-11 09:37:52.604908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:21.042 [2024-06-11 09:37:52.604953] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12396c0 was disconnected and freed. reset controller.
00:23:21.042 [2024-06-11 09:37:52.605053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:21.042 [2024-06-11 09:37:52.605068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (same ASYNC EVENT REQUEST/ABORTED pair repeated for cid:1 through cid:3)
00:23:21.042 [2024-06-11 09:37:52.605122] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd42610 is same with the state(5) to be set
... (same group of four ASYNC EVENT REQUEST/ABORTED pairs and recv state error repeated for tqpair=0x128c2b0, 0x1406790, 0x124bc50, 0x1286eb0, 0x1268900, 0x1407a60, 0x123d650, 0x1403f20 and 0x13fff90, through 09:37:52.605907)
00:23:21.043 [2024-06-11 09:37:52.606834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.043 [2024-06-11 09:37:52.606855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (same WRITE/ABORTED pair repeated for cid:59 through cid:63, lba 32128 through 32640)
00:23:21.044 [2024-06-11 09:37:52.606960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.044 [2024-06-11 09:37:52.606968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (same READ/ABORTED pair repeated for cid:1 through cid:29, lba 24704 through 28288, completions through 09:37:52.617304)
00:23:21.044 [2024-06-11 09:37:52.617313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.044 [2024-06-11 09:37:52.617327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.044 [2024-06-11 09:37:52.617337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.044 [2024-06-11 09:37:52.617344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.044 [2024-06-11 09:37:52.617354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.044 [2024-06-11 09:37:52.617361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.044 [2024-06-11 09:37:52.617370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.044 [2024-06-11 09:37:52.617377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.044 [2024-06-11 09:37:52.617386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.044 [2024-06-11 09:37:52.617393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.044 [2024-06-11 09:37:52.617402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.044 [2024-06-11 09:37:52.617409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.044 [2024-06-11 09:37:52.617418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.045 [2024-06-11 09:37:52.617657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.045 [2024-06-11 09:37:52.617770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.045 [2024-06-11 09:37:52.617838] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13c9b60 was disconnected and freed. reset controller. 
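The "(00/08)" printed with every aborted completion above is the NVMe status field split into Status Code Type and Status Code: SCT 0x0 is Generic Command Status, and within that type SC 0x08 is "Command Aborted due to SQ Deletion", which spdk_nvme_print_completion renders as ABORTED - SQ DELETION. A minimal decoding sketch in plain Python (not SPDK code; the table covers only the two values that appear in this log):

    # Decode an NVMe completion status the way the log lines above print it.
    # SCT 0x0 = Generic Command Status; within it, SC 0x08 = "Command Aborted
    # due to SQ Deletion" (NVMe base spec), shown by SPDK as the (sct/sc) pair.
    GENERIC_STATUS = {
        0x00: "SUCCESS",
        0x08: "ABORTED - SQ DELETION",
    }

    def decode_status(sct: int, sc: int) -> str:
        """Render a status as '<name> (sct/sc)' like spdk_nvme_print_completion."""
        name = GENERIC_STATUS.get(sc, "UNKNOWN") if sct == 0x0 else "NON-GENERIC"
        return f"{name} ({sct:02x}/{sc:02x})"

    print(decode_status(0x0, 0x08))  # -> ABORTED - SQ DELETION (00/08)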
00:23:21.045 [2024-06-11 09:37:52.619293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd42610 (9): Bad file descriptor
[… repetitive entries suppressed: the same nvme_tcp_qpair_process_completions *ERROR* "Failed to flush tqpair=… (9): Bad file descriptor" repeats for tqpair 0x128c2b0, 0x1406790, 0x124bc50, 0x1286eb0, 0x1268900, 0x1407a60, 0x123d650, 0x1403f20 and 0x13fff90; SPDK timestamps 09:37:52.619331–.619451 …]
[… repetitive entries suppressed: a second I/O qpair sqid:1 drains — WRITE cid:3–54 (lba 24960–31488, len:128), READ cid:0–2 (lba 24576–24832, len:128) and WRITE cid:55–63 (lba 31616–32640, len:128) each printed by nvme_io_qpair_print_command and completed ABORTED - SQ DELETION (00/08); SPDK timestamps 09:37:52.619532–.620585 …]
[2024-06-11 09:37:52.620640] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x136fc60 was disconnected and freed. reset controller.
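The "(9): Bad file descriptor" in the flush errors above is Linux errno 9, EBADF: by the time nvme_tcp_qpair_process_completions tries to flush a qpair, its socket has already been closed, so any further operation on that file descriptor fails. A minimal reproduction of the same errno, assuming Linux (plain Python, unrelated to the SPDK sources):

    import errno
    import os

    # errno 9 on Linux is EBADF, the value the flush errors above report.
    assert errno.EBADF == 9

    r, w = os.pipe()
    os.close(w)                        # the fd is torn down, as the qpair socket was
    try:
        os.write(w, b"flush")          # any further use of the closed fd fails
    except OSError as e:
        print(e.errno, e.strerror)     # -> 9 Bad file descriptor
    finally:
        os.close(r)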
00:23:21.047 [2024-06-11 09:37:52.622113] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
[2024-06-11 09:37:52.625341] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
[2024-06-11 09:37:52.625367] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
[2024-06-11 09:37:52.625872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-06-11 09:37:52.625911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fff90 with addr=10.0.0.2, port=4420
[2024-06-11 09:37:52.625924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fff90 is same with the state(5) to be set
[2024-06-11 09:37:52.626040] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13c6a30 was disconnected and freed. reset controller.
[… repetitive entries suppressed: nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 — six occurrences, SPDK timestamps 09:37:52.626106–.626592 …]
[2024-06-11 09:37:52.627054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-06-11 09:37:52.627069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1286eb0 with addr=10.0.0.2, port=4420
[2024-06-11 09:37:52.627077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286eb0 is same with the state(5) to be set
[2024-06-11 09:37:52.627559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-06-11 09:37:52.627596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1407a60 with addr=10.0.0.2, port=4420
[2024-06-11 09:37:52.627607] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1407a60 is same with the state(5) to be set
[2024-06-11 09:37:52.627622] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fff90 (9): Bad file descriptor
[2024-06-11 09:37:52.628239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-06-11 09:37:52.628268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1286eb0 (9): Bad file descriptor
[2024-06-11 09:37:52.628278] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1407a60 (9): Bad file descriptor
[2024-06-11 09:37:52.628286] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
[2024-06-11 09:37:52.628293] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
[2024-06-11 09:37:52.628301] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
[2024-06-11 09:37:52.628372] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-06-11 09:37:52.628794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-06-11 09:37:52.628807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123d650 with addr=10.0.0.2, port=4420
[2024-06-11 09:37:52.628815] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123d650 is same with the state(5) to be set
[2024-06-11 09:37:52.628822] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
[2024-06-11 09:37:52.628827] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
[2024-06-11 09:37:52.628834] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
[2024-06-11 09:37:52.628845] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
[2024-06-11 09:37:52.628851] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
[2024-06-11 09:37:52.628858] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
[2024-06-11 09:37:52.628905] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-06-11 09:37:52.628912] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-06-11 09:37:52.628920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123d650 (9): Bad file descriptor
[2024-06-11 09:37:52.628955] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-06-11 09:37:52.628961] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-06-11 09:37:52.628967] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-06-11 09:37:52.629004] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
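The "connect() failed, errno = 111" entries above are Linux ECONNREFUSED: during the reset nothing is accepting on 10.0.0.2:4420 (4420 being the conventional NVMe/TCP port), so every reconnect attempt for tqpair 0x13fff90, 0x1286eb0, 0x1407a60 and 0x123d650 is refused and each controller lands in the failed state reported by nvme_ctrlr_fail. A minimal way to observe the same errno, assuming Linux and a port with no listener (plain Python; the loopback address below is a stand-in, not the test network):

    import errno
    import socket

    # errno 111 on Linux is ECONNREFUSED, the value posix_sock_create logs above.
    assert errno.ECONNREFUSED == 111

    def try_connect(addr: str, port: int) -> int:
        """Return 0 on success, else the errno of the failed TCP connect."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            return s.connect_ex((addr, port))

    # With no NVMe/TCP target listening, this prints 111 (ECONNREFUSED).
    print(try_connect("127.0.0.1", 4420))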
00:23:21.048 [2024-06-11 09:37:52.629422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[… repetitive entries suppressed: a third I/O qpair sqid:1 drains — READ cid:0–49 (lba 24576–30848, len:128) each printed by nvme_io_qpair_print_command and completed ABORTED - SQ DELETION (00/08); SPDK timestamps 09:37:52.629422–.630233; the run continues below …]
[2024-06-11 09:37:52.630242] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.630249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.630258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.630265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.630274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.630281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.630290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.630297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.630306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.630313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.630326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.630334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.630345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.630352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.630361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.630383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.630392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.630399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.630408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.630415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.630424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.630430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.630440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.630447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.630457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.630464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.630473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.630480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.630488] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7250 is same with the state(5) to be set 00:23:21.049 [2024-06-11 09:37:52.631781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.631795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.631809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.631817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.631829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.631838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.631849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.631857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.631870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.631877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.631886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.631893] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.631902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.631909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.631919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.631926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.631935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.631942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.631951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.631958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.049 [2024-06-11 09:37:52.631967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.049 [2024-06-11 09:37:52.631975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.631984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.631991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.050 [2024-06-11 09:37:52.632567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:21.050 [2024-06-11 09:37:52.632584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.050 [2024-06-11 09:37:52.632591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.632600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.632608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.632617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.632624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.632633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.632640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.632649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.632656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.632665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.632672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.632681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.632688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.632697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.632704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.632715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.632722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.632731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.632738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 
09:37:52.632747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.632754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.632763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.632770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.632779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.632786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.632795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.632801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.632810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.632817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.632826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.632833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.632842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.632849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.632858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.632865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.632873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8690 is same with the state(5) to be set 00:23:21.051 [2024-06-11 09:37:52.634135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634169] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.051 [2024-06-11 09:37:52.634510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.051 [2024-06-11 09:37:52.634519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634836] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.634990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.634997] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.635007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.635015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.635023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.635030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.635039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.635046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.635055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.635062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.635071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.635078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.635087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.635094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.635103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.635110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.635119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.635126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.635135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.635142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.052 [2024-06-11 09:37:52.635151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.052 [2024-06-11 09:37:52.635158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.053 [2024-06-11 09:37:52.635167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.053 [2024-06-11 09:37:52.635175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.053 [2024-06-11 09:37:52.635184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.053 [2024-06-11 09:37:52.635191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.053 [2024-06-11 09:37:52.635199] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12359f0 is same with the state(5) to be set 00:23:21.053 [2024-06-11 09:37:52.636464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.053 [2024-06-11 09:37:52.636479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.053 [2024-06-11 09:37:52.636491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.053 [2024-06-11 09:37:52.636500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.053 [2024-06-11 09:37:52.636509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.053 [2024-06-11 09:37:52.636517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.053 [2024-06-11 09:37:52.636526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.053 [2024-06-11 09:37:52.636533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.053 [2024-06-11 09:37:52.636543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.053 [2024-06-11 09:37:52.636549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.053 [2024-06-11 09:37:52.636559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.053 [2024-06-11 09:37:52.636566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.053 [2024-06-11 09:37:52.636575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.053 [2024-06-11 09:37:52.636582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.053 [2024-06-11 09:37:52.636592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
00:23:21.053 [2024-06-11 09:37:52.636464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.053 [2024-06-11 09:37:52.636479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the pair repeats for cid:1..63, lba:24704..32640 (len:128 each), through 09:37:52.637521 ...]
00:23:21.054 [2024-06-11 09:37:52.637529] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1236ec0 is same with the state(5) to be set
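The pattern in each burst is easy to state precisely: cid counts 0..63 while lba climbs in fixed steps of 128 blocks with len:128, i.e. a 64-command-deep sequential read stream per qpair, and every one of those commands completes with the same abort status. To recover the compact ranges shown here from a raw capture of this console, a one-off filter along these lines should do (nvmf-tcp.log is a hypothetical file name for the saved log):

    # Count the aborted READs and report the LBA span they covered.
    grep -oE 'READ sqid:1 cid:[0-9]+ nsid:1 lba:[0-9]+' nvmf-tcp.log |
    awk -F'lba:' 'NR == 1 { min = $2 + 0 }
                  { lba = $2 + 0; if (lba < min) min = lba; if (lba > max) max = lba }
                  END { printf "%d READs, lba %d..%d\n", NR, min, max }'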
00:23:21.054 [2024-06-11 09:37:52.638788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.054 [2024-06-11 09:37:52.638800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the pair repeats for cid:1..63, lba:16512..24448 (len:128 each), through 09:37:52.639839 ...]
00:23:21.056 [2024-06-11 09:37:52.639846] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12383c0 is same with the state(5) to be set
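Each burst ends with the same nvme_tcp.c:323 error for a different tqpair pointer (0x12359f0, 0x1236ec0, 0x12383c0 so far): nvme_tcp_qpair_set_recv_state was asked to set a receive state the qpair was already in, and in this run the message appears exactly once per qpair, right after that qpair's outstanding reads are aborted. One marker per qpair, with its timestamp, can be pulled out of the same hypothetical capture like so:

    # List one set_recv_state marker per TCP qpair, with its timestamp.
    grep 'is same with the state' nvmf-tcp.log |
    sed -E 's/.*\[([^]]+)\].*tqpair=(0x[0-9a-f]+).*/\2 at \1/' |
    sort -u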
00:23:21.056 [2024-06-11 09:37:52.641363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:21.056 [2024-06-11 09:37:52.641382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the pair repeats for cid:1..58, lba:16512..23808 (len:128 each), through 09:37:52.642335 ...]
00:23:21.057 [2024-06-11
09:37:52.642345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.057 [2024-06-11 09:37:52.642353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.057 [2024-06-11 09:37:52.642362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.057 [2024-06-11 09:37:52.642369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.057 [2024-06-11 09:37:52.642378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.057 [2024-06-11 09:37:52.642385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.058 [2024-06-11 09:37:52.642395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.058 [2024-06-11 09:37:52.642402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.058 [2024-06-11 09:37:52.642410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.058 [2024-06-11 09:37:52.642417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.058 [2024-06-11 09:37:52.642425] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bf390 is same with the state(5) to be set 00:23:21.058 [2024-06-11 09:37:52.643900] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:21.058 [2024-06-11 09:37:52.643921] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:21.058 [2024-06-11 09:37:52.643930] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:21.058 [2024-06-11 09:37:52.643940] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:21.058 [2024-06-11 09:37:52.644016] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.058 [2024-06-11 09:37:52.644030] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
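A note on the "(00/08)" in every completion above: spdk_nvme_print_completion prints the NVMe status as (SCT/SC), here status code type 0x00 (generic command status) with status code 0x08, i.e. Command Aborted due to SQ Deletion, which is exactly what queued READs report when the target tears its submission queues down mid-run. A tiny bash helper (hypothetical, not part of the test scripts) that decodes such a pair:

    # Decode the "(SCT/SC)" status pair printed by spdk_nvme_print_completion.
    decode_nvme_status() {
        local pair=${1//[()]/}            # "(00/08)" -> "00/08"
        local sct=$((16#${pair%%/*}))     # status code type
        local sc=$((16#${pair##*/}))      # status code
        case $sct in
            0) printf 'generic command status, sc=0x%02x\n' "$sc" ;;
            1) printf 'command specific status, sc=0x%02x\n' "$sc" ;;
            2) printf 'media/data integrity error, sc=0x%02x\n' "$sc" ;;
            *) printf 'sct=0x%02x, sc=0x%02x\n' "$sct" "$sc" ;;
        esac
    }
    decode_nvme_status "(00/08)"          # -> generic command status, sc=0x08

For SCT 0, SC 08h is Command Aborted due to SQ Deletion in the NVMe base spec, matching the "ABORTED - SQ DELETION" text SPDK prints alongside it.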
00:23:21.058 [2024-06-11 09:37:52.644099] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:23:21.058 task offset: 27904 on job bdev=Nvme9n1 fails
00:23:21.058
00:23:21.058                                                            Latency(us)
00:23:21.058 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:23:21.058 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.058 Verification LBA range: start 0x0 length 0x400
00:23:21.058 Nvme1n1                     :       0.95     203.08      12.69       0.00       0.00  311682.84   39540.05  283115.52
00:23:21.058 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.058 Job: Nvme2n1 ended in about 0.95 seconds with error
00:23:21.058 Verification LBA range: start 0x0 length 0x400
00:23:21.058 Nvme2n1                     :       0.95     202.83      12.68      67.61       0.00  229276.37   17148.59  272629.76
00:23:21.058 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.058 Job: Nvme3n1 ended in about 0.95 seconds with error
00:23:21.058 Verification LBA range: start 0x0 length 0x400
00:23:21.058 Nvme3n1                     :       0.95     201.37      12.59      67.12       0.00  226169.81   19988.48  248162.99
00:23:21.058 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.058 Job: Nvme4n1 ended in about 0.96 seconds with error
00:23:21.058 Verification LBA range: start 0x0 length 0x400
00:23:21.058 Nvme4n1                     :       0.96     205.06      12.82      66.96       0.00  218605.83   18240.85  239424.85
00:23:21.058 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.058 Job: Nvme5n1 ended in about 0.94 seconds with error
00:23:21.058 Verification LBA range: start 0x0 length 0x400
00:23:21.058 Nvme5n1                     :       0.94     203.45      12.72      67.82       0.00  214296.11   17476.27  242920.11
00:23:21.058 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.058 Job: Nvme6n1 ended in about 0.96 seconds with error
00:23:21.058 Verification LBA range: start 0x0 length 0x400
00:23:21.058 Nvme6n1                     :       0.96     133.59       8.35      66.80       0.00  284313.03   21189.97  263891.63
00:23:21.058 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.058 Job: Nvme7n1 ended in about 0.96 seconds with error
00:23:21.058 Verification LBA range: start 0x0 length 0x400
00:23:21.058 Nvme7n1                     :       0.96     199.90      12.49      66.63       0.00  208975.15   20643.84  249910.61
00:23:21.058 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.058 Job: Nvme8n1 ended in about 0.96 seconds with error
00:23:21.058 Verification LBA range: start 0x0 length 0x400
00:23:21.058 Nvme8n1                     :       0.96     132.95       8.31      66.47       0.00  273240.46   18022.40  255153.49
00:23:21.058 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.058 Job: Nvme9n1 ended in about 0.94 seconds with error
00:23:21.058 Verification LBA range: start 0x0 length 0x400
00:23:21.058 Nvme9n1                     :       0.94     204.04      12.75      68.01       0.00  194639.57   14527.15  221948.59
00:23:21.058 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.058 Job: Nvme10n1 ended in about 0.97 seconds with error
00:23:21.058 Verification LBA range: start 0x0 length 0x400
00:23:21.058 Nvme10n1                    :       0.97     132.59       8.29      66.30       0.00  261587.63   19660.80  277872.64
00:23:21.058 ===================================================================================================================
00:23:21.058 Total                       :                1818.88     113.68     603.73       0.00  237753.51   14527.15  283115.52
00:23:21.058 [2024-06-11 09:37:52.669234] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:21.058 [2024-06-11 09:37:52.669281] 
nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:21.058 [2024-06-11 09:37:52.669774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.058 [2024-06-11 09:37:52.669793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1268900 with addr=10.0.0.2, port=4420 00:23:21.058 [2024-06-11 09:37:52.669803] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1268900 is same with the state(5) to be set 00:23:21.058 [2024-06-11 09:37:52.670042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.058 [2024-06-11 09:37:52.670051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124bc50 with addr=10.0.0.2, port=4420 00:23:21.058 [2024-06-11 09:37:52.670058] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124bc50 is same with the state(5) to be set 00:23:21.058 [2024-06-11 09:37:52.670567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.058 [2024-06-11 09:37:52.670605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd42610 with addr=10.0.0.2, port=4420 00:23:21.058 [2024-06-11 09:37:52.670615] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd42610 is same with the state(5) to be set 00:23:21.058 [2024-06-11 09:37:52.670994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.058 [2024-06-11 09:37:52.671006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x128c2b0 with addr=10.0.0.2, port=4420 00:23:21.058 [2024-06-11 09:37:52.671013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128c2b0 is same with the state(5) to be set 00:23:21.058 [2024-06-11 09:37:52.672612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:21.058 [2024-06-11 09:37:52.672628] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:21.058 [2024-06-11 09:37:52.672637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:21.058 [2024-06-11 09:37:52.672651] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:21.058 [2024-06-11 09:37:52.672926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.058 [2024-06-11 09:37:52.672941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1406790 with addr=10.0.0.2, port=4420 00:23:21.058 [2024-06-11 09:37:52.672948] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1406790 is same with the state(5) to be set 00:23:21.058 [2024-06-11 09:37:52.673180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.058 [2024-06-11 09:37:52.673188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1403f20 with addr=10.0.0.2, port=4420 00:23:21.058 [2024-06-11 09:37:52.673195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1403f20 is same with the state(5) to be set 00:23:21.058 [2024-06-11 09:37:52.673207] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1268900 (9): Bad file descriptor 00:23:21.058 
[2024-06-11 09:37:52.673218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124bc50 (9): Bad file descriptor 00:23:21.058 [2024-06-11 09:37:52.673227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd42610 (9): Bad file descriptor 00:23:21.058 [2024-06-11 09:37:52.673236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128c2b0 (9): Bad file descriptor 00:23:21.058 [2024-06-11 09:37:52.673269] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.058 [2024-06-11 09:37:52.673281] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.058 [2024-06-11 09:37:52.673291] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.058 [2024-06-11 09:37:52.673302] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:21.058 [2024-06-11 09:37:52.673790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.058 [2024-06-11 09:37:52.673803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fff90 with addr=10.0.0.2, port=4420 00:23:21.058 [2024-06-11 09:37:52.673810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fff90 is same with the state(5) to be set 00:23:21.058 [2024-06-11 09:37:52.673988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.058 [2024-06-11 09:37:52.673996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1407a60 with addr=10.0.0.2, port=4420 00:23:21.058 [2024-06-11 09:37:52.674003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1407a60 is same with the state(5) to be set 00:23:21.058 [2024-06-11 09:37:52.674434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.058 [2024-06-11 09:37:52.674444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1286eb0 with addr=10.0.0.2, port=4420 00:23:21.058 [2024-06-11 09:37:52.674451] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1286eb0 is same with the state(5) to be set 00:23:21.059 [2024-06-11 09:37:52.674810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.059 [2024-06-11 09:37:52.674819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123d650 with addr=10.0.0.2, port=4420 00:23:21.059 [2024-06-11 09:37:52.674826] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123d650 is same with the state(5) to be set 00:23:21.059 [2024-06-11 09:37:52.674835] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1406790 (9): Bad file descriptor 00:23:21.059 [2024-06-11 09:37:52.674844] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1403f20 (9): Bad file descriptor 00:23:21.059 [2024-06-11 09:37:52.674855] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:21.059 [2024-06-11 09:37:52.674862] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:21.059 [2024-06-11 09:37:52.674870] 
nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:21.059 [2024-06-11 09:37:52.674882] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:21.059 [2024-06-11 09:37:52.674888] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:21.059 [2024-06-11 09:37:52.674894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:21.059 [2024-06-11 09:37:52.674904] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:21.059 [2024-06-11 09:37:52.674910] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:21.059 [2024-06-11 09:37:52.674917] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:21.059 [2024-06-11 09:37:52.674926] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:21.059 [2024-06-11 09:37:52.674933] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:21.059 [2024-06-11 09:37:52.674939] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:21.059 [2024-06-11 09:37:52.675010] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.059 [2024-06-11 09:37:52.675018] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.059 [2024-06-11 09:37:52.675024] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.059 [2024-06-11 09:37:52.675030] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.059 [2024-06-11 09:37:52.675037] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fff90 (9): Bad file descriptor 00:23:21.059 [2024-06-11 09:37:52.675046] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1407a60 (9): Bad file descriptor 00:23:21.059 [2024-06-11 09:37:52.675055] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1286eb0 (9): Bad file descriptor 00:23:21.059 [2024-06-11 09:37:52.675064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123d650 (9): Bad file descriptor 00:23:21.059 [2024-06-11 09:37:52.675072] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:21.059 [2024-06-11 09:37:52.675078] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:21.059 [2024-06-11 09:37:52.675084] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:21.059 [2024-06-11 09:37:52.675093] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:21.059 [2024-06-11 09:37:52.675099] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:21.059 [2024-06-11 09:37:52.675106] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:23:21.059 [2024-06-11 09:37:52.675132] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.059 [2024-06-11 09:37:52.675139] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.059 [2024-06-11 09:37:52.675145] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:21.059 [2024-06-11 09:37:52.675151] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:21.059 [2024-06-11 09:37:52.675159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:21.059 [2024-06-11 09:37:52.675168] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:21.059 [2024-06-11 09:37:52.675174] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:21.059 [2024-06-11 09:37:52.675181] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:21.059 [2024-06-11 09:37:52.675189] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:21.059 [2024-06-11 09:37:52.675195] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:21.059 [2024-06-11 09:37:52.675202] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:21.059 [2024-06-11 09:37:52.675211] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:21.059 [2024-06-11 09:37:52.675217] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:21.059 [2024-06-11 09:37:52.675223] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:21.059 [2024-06-11 09:37:52.675251] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.059 [2024-06-11 09:37:52.675257] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.059 [2024-06-11 09:37:52.675263] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:21.059 [2024-06-11 09:37:52.675270] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
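All of the reconnect attempts above fail identically: posix_sock_create() gets errno 111, which is ECONNREFUSED on Linux, because the target process that listened on 10.0.0.2's ports was already killed, so each controller ends "in failed state" and every reset completes with an error. A quick standalone check for that condition (a sketch using only bash's built-in /dev/tcp redirection, not the test helpers):

    # Probe the listener the reconnects are aiming at; refused or timed out
    # here corresponds to the "connect() failed, errno = 111" lines above.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "10.0.0.2:4420 is accepting connections"
    else
        echo "10.0.0.2:4420 refused - matches ECONNREFUSED (111) in the log"
    fi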
00:23:21.319 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:21.319 09:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1219658 00:23:22.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1219658) - No such process 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:22.261 rmmod nvme_tcp 00:23:22.261 rmmod nvme_fabrics 00:23:22.261 rmmod nvme_keyring 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.261 09:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.810 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:24.810 00:23:24.810 real 0m7.830s 00:23:24.810 user 0m19.251s 00:23:24.810 sys 0m1.275s 00:23:24.810 
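The teardown traced above runs in a fixed order: remove the bdevperf state and config files, sync, unload the kernel initiator stack (modprobe -v -r nvme-tcp pulls nvme_tcp, nvme_fabrics and nvme_keyring out, as the rmmod lines show), then dismantle the namespaced network. Condensed into plain commands (a sketch; cvl_0_1 and the cvl_0_0_ns_spdk namespace are this rig's names, and the netns step is an assumption about what _remove_spdk_ns amounts to):

    sync
    modprobe -v -r nvme-tcp                      # also removes nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics                  # no-op if the line above already took it
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null  # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                     # matches the flush logged above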
09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:24.810 09:37:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:24.810 ************************************ 00:23:24.810 END TEST nvmf_shutdown_tc3 00:23:24.810 ************************************ 00:23:24.810 09:37:56 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:24.810 00:23:24.810 real 0m31.972s 00:23:24.810 user 1m14.891s 00:23:24.810 sys 0m9.267s 00:23:24.810 09:37:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:24.810 09:37:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:24.810 ************************************ 00:23:24.810 END TEST nvmf_shutdown 00:23:24.810 ************************************ 00:23:24.810 09:37:56 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:23:24.810 09:37:56 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:24.810 09:37:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:24.810 09:37:56 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:23:24.810 09:37:56 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:24.810 09:37:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:24.810 09:37:56 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:23:24.810 09:37:56 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:24.810 09:37:56 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:24.810 09:37:56 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:24.810 09:37:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:24.810 ************************************ 00:23:24.810 START TEST nvmf_multicontroller 00:23:24.810 ************************************ 00:23:24.810 09:37:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:24.810 * Looking for test storage... 
00:23:24.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:24.810 09:37:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.810 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:24.810 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.810 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.810 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.810 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.810 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.810 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.810 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.810 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.810 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.810 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.810 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:24.810 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:24.810 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.810 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.810 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.810 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:24.811 09:37:56 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:24.811 09:37:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.958 09:38:03 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:32.958 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:32.958 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.958 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:32.959 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:32.959 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.959 09:38:03 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:32.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:23:32.959 00:23:32.959 --- 10.0.0.2 ping statistics --- 00:23:32.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.959 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:32.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:23:32.959 00:23:32.959 --- 10.0.0.1 ping statistics --- 00:23:32.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.959 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1224710 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1224710 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 1224710 ']' 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:32.959 09:38:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.959 [2024-06-11 09:38:03.653526] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:23:32.959 [2024-06-11 09:38:03.653588] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.959 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.959 [2024-06-11 09:38:03.723697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:32.959 [2024-06-11 09:38:03.797341] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.959 [2024-06-11 09:38:03.797378] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.959 [2024-06-11 09:38:03.797385] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.959 [2024-06-11 09:38:03.797392] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.959 [2024-06-11 09:38:03.797397] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:32.959 [2024-06-11 09:38:03.797507] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.959 [2024-06-11 09:38:03.797667] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.959 [2024-06-11 09:38:03.797668] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.959 [2024-06-11 09:38:04.565471] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:32.959 09:38:04 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.959 Malloc0 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:32.959 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.960 [2024-06-11 09:38:04.630598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.960 [2024-06-11 09:38:04.642545] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.960 Malloc1 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 
00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1224978 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1224978 /var/tmp/bdevperf.sock 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 1224978 ']' 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
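The bdevperf launch above uses the same wait-for-socket pattern: -z keeps the app idle until an explicit perform_tests RPC (issued later in this log via examples/bdev/bdevperf/bdevperf.py), and -r gives it its own RPC socket so controllers can be hot-attached. A rough standalone equivalent of the next few traced steps, under the same path assumptions as the sketch above:

    # start bdevperf idle (-z): queue depth 128, 4096-byte writes, 1-second runs
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    # once the socket is up, attach the first path; -i/-c pin the host-side
    # address and service id; later attempts to re-attach the same controller
    # name with different host parameters are rejected with -114, as the
    # JSON-RPC responses below show
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000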
00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:32.960 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.221 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:33.221 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:23:33.221 09:38:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:33.221 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:33.221 09:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.482 NVMe0n1 00:23:33.482 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:33.483 1 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.483 request: 00:23:33.483 { 00:23:33.483 "name": "NVMe0", 00:23:33.483 "trtype": "tcp", 00:23:33.483 "traddr": "10.0.0.2", 00:23:33.483 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:33.483 "hostaddr": "10.0.0.2", 00:23:33.483 "hostsvcid": "60000", 00:23:33.483 "adrfam": "ipv4", 00:23:33.483 "trsvcid": "4420", 00:23:33.483 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.483 "method": 
"bdev_nvme_attach_controller", 00:23:33.483 "req_id": 1 00:23:33.483 } 00:23:33.483 Got JSON-RPC error response 00:23:33.483 response: 00:23:33.483 { 00:23:33.483 "code": -114, 00:23:33.483 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:33.483 } 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.483 request: 00:23:33.483 { 00:23:33.483 "name": "NVMe0", 00:23:33.483 "trtype": "tcp", 00:23:33.483 "traddr": "10.0.0.2", 00:23:33.483 "hostaddr": "10.0.0.2", 00:23:33.483 "hostsvcid": "60000", 00:23:33.483 "adrfam": "ipv4", 00:23:33.483 "trsvcid": "4420", 00:23:33.483 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:33.483 "method": "bdev_nvme_attach_controller", 00:23:33.483 "req_id": 1 00:23:33.483 } 00:23:33.483 Got JSON-RPC error response 00:23:33.483 response: 00:23:33.483 { 00:23:33.483 "code": -114, 00:23:33.483 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:33.483 } 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.483 request: 00:23:33.483 { 00:23:33.483 "name": "NVMe0", 00:23:33.483 "trtype": "tcp", 00:23:33.483 "traddr": "10.0.0.2", 00:23:33.483 "hostaddr": "10.0.0.2", 00:23:33.483 "hostsvcid": "60000", 00:23:33.483 "adrfam": "ipv4", 00:23:33.483 "trsvcid": "4420", 00:23:33.483 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.483 "multipath": "disable", 00:23:33.483 "method": "bdev_nvme_attach_controller", 00:23:33.483 "req_id": 1 00:23:33.483 } 00:23:33.483 Got JSON-RPC error response 00:23:33.483 response: 00:23:33.483 { 00:23:33.483 "code": -114, 00:23:33.483 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:33.483 } 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:33.483 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.744 request: 00:23:33.744 { 00:23:33.744 "name": "NVMe0", 00:23:33.744 "trtype": "tcp", 00:23:33.744 "traddr": "10.0.0.2", 00:23:33.744 "hostaddr": "10.0.0.2", 00:23:33.744 "hostsvcid": "60000", 00:23:33.744 "adrfam": "ipv4", 00:23:33.744 "trsvcid": "4420", 00:23:33.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.744 "multipath": "failover", 00:23:33.744 "method": "bdev_nvme_attach_controller", 00:23:33.744 "req_id": 1 00:23:33.744 } 00:23:33.744 Got JSON-RPC error response 00:23:33.744 response: 00:23:33.744 { 00:23:33.744 "code": -114, 00:23:33.744 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:33.744 } 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.744 00:23:33.744 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:33.745 09:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:33.745 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:33.745 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.745 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:33.745 09:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:33.745 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:33.745 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:33.745 00:23:33.745 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:33.745 09:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:33.745 09:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:33.745 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:33.745 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.005 09:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.005 09:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:34.005 09:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:34.947 0 00:23:34.947 09:38:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:34.947 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.948 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.948 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.948 09:38:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1224978 00:23:34.948 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 1224978 ']' 00:23:34.948 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 1224978 00:23:34.948 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:23:34.948 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:34.948 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1224978 00:23:35.208 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:35.208 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:35.208 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1224978' 00:23:35.208 killing process with pid 1224978 00:23:35.208 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 1224978 00:23:35.208 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 1224978 00:23:35.208 09:38:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:35.208 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.208 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.208 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.208 09:38:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:35.208 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.208 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.208 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.208 09:38:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:35.208 09:38:06 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:35.208 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:23:35.208 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:35.208 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # sort -u 00:23:35.208 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # cat 00:23:35.208 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:35.208 [2024-06-11 09:38:04.760955] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:23:35.208 [2024-06-11 09:38:04.761006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224978 ] 00:23:35.208 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.208 [2024-06-11 09:38:04.836589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.208 [2024-06-11 09:38:04.900878] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.208 [2024-06-11 09:38:05.550412] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 8ef1f162-1a7d-4316-957b-68ce50fe2bf9 already exists 00:23:35.208 [2024-06-11 09:38:05.550440] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:8ef1f162-1a7d-4316-957b-68ce50fe2bf9 alias for bdev NVMe1n1 00:23:35.208 [2024-06-11 09:38:05.550449] bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:35.208 Running I/O for 1 seconds... 
00:23:35.208
00:23:35.208 Latency(us)
00:23:35.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:35.208 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:23:35.208 NVMe0n1 : 1.00 20338.61 79.45 0.00 0.00 6276.58 3986.77 16165.55
00:23:35.208 ===================================================================================================================
00:23:35.208 Total : 20338.61 79.45 0.00 0.00 6276.58 3986.77 16165.55
00:23:35.208 Received shutdown signal, test time was about 1.000000 seconds
00:23:35.208
00:23:35.208 Latency(us)
00:23:35.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:35.208 ===================================================================================================================
00:23:35.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:35.209 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:23:35.209 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1617 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:35.209 09:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:23:35.209 09:38:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:35.209 09:38:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:35.209 09:38:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:35.209 09:38:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:35.209 09:38:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:35.209 09:38:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:35.209 09:38:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:35.209 rmmod nvme_tcp 00:23:35.209 rmmod nvme_fabrics 00:23:35.209 rmmod nvme_keyring 00:23:35.209 09:38:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:35.209 09:38:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:35.209 09:38:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:35.209 09:38:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1224710 ']' 00:23:35.209 09:38:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1224710 00:23:35.209 09:38:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 1224710 ']' 00:23:35.209 09:38:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 1224710 00:23:35.209 09:38:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:23:35.209 09:38:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:35.209 09:38:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1224710 00:23:35.469 09:38:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:35.469 09:38:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:35.469 09:38:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1224710' 00:23:35.469 killing process with pid 1224710 00:23:35.469 09:38:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 1224710 00:23:35.469 09:38:07
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 1224710 00:23:35.469 09:38:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:35.469 09:38:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:35.469 09:38:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:35.469 09:38:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:35.469 09:38:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:35.469 09:38:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.469 09:38:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.469 09:38:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.018 09:38:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:38.018 00:23:38.018 real 0m13.083s 00:23:38.018 user 0m14.749s 00:23:38.018 sys 0m6.136s 00:23:38.018 09:38:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:38.018 09:38:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:38.018 ************************************ 00:23:38.018 END TEST nvmf_multicontroller 00:23:38.018 ************************************ 00:23:38.018 09:38:09 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:38.019 09:38:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:38.019 09:38:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:38.019 09:38:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:38.019 ************************************ 00:23:38.019 START TEST nvmf_aer 00:23:38.019 ************************************ 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:38.019 * Looking for test storage... 
00:23:38.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:38.019 09:38:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:44.678 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 
0x159b)' 00:23:44.678 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:44.678 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:44.678 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:44.679 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.679 
09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:44.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:23:44.679 00:23:44.679 --- 10.0.0.2 ping statistics --- 00:23:44.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.679 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:23:44.679 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:44.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:23:44.941 00:23:44.941 --- 10.0.0.1 ping statistics --- 00:23:44.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.941 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1229423 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1229423 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 1229423 ']' 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:44.941 09:38:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.941 [2024-06-11 09:38:16.593176] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:23:44.941 [2024-06-11 09:38:16.593244] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.941 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.941 [2024-06-11 09:38:16.683035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:45.203 [2024-06-11 09:38:16.781141] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:45.203 [2024-06-11 09:38:16.781199] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:45.203 [2024-06-11 09:38:16.781211] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:45.203 [2024-06-11 09:38:16.781218] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:45.203 [2024-06-11 09:38:16.781224] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:45.203 [2024-06-11 09:38:16.781409] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.203 [2024-06-11 09:38:16.781485] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.203 [2024-06-11 09:38:16.781665] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:45.203 [2024-06-11 09:38:16.781666] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.779 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:45.779 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:23:45.779 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:45.779 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:45.779 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:45.779 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.779 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:45.779 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.779 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:45.780 [2024-06-11 09:38:17.468902] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:45.780 Malloc0 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:45.780 [2024-06-11 09:38:17.528301] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:45.780 [ 00:23:45.780 { 00:23:45.780 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:45.780 "subtype": "Discovery", 00:23:45.780 "listen_addresses": [], 00:23:45.780 "allow_any_host": true, 00:23:45.780 "hosts": [] 00:23:45.780 }, 00:23:45.780 { 00:23:45.780 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.780 "subtype": "NVMe", 00:23:45.780 "listen_addresses": [ 00:23:45.780 { 00:23:45.780 "trtype": "TCP", 00:23:45.780 "adrfam": "IPv4", 00:23:45.780 "traddr": "10.0.0.2", 00:23:45.780 "trsvcid": "4420" 00:23:45.780 } 00:23:45.780 ], 00:23:45.780 "allow_any_host": true, 00:23:45.780 "hosts": [], 00:23:45.780 "serial_number": "SPDK00000000000001", 00:23:45.780 "model_number": "SPDK bdev Controller", 00:23:45.780 "max_namespaces": 2, 00:23:45.780 "min_cntlid": 1, 00:23:45.780 "max_cntlid": 65519, 00:23:45.780 "namespaces": [ 00:23:45.780 { 00:23:45.780 "nsid": 1, 00:23:45.780 "bdev_name": "Malloc0", 00:23:45.780 "name": "Malloc0", 00:23:45.780 "nguid": "3F9F009903BC4E57BD00203586BBEE3D", 00:23:45.780 "uuid": "3f9f0099-03bc-4e57-bd00-203586bbee3d" 00:23:45.780 } 00:23:45.780 ] 00:23:45.780 } 00:23:45.780 ] 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1229772 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:23:45.780 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:23:46.041 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.041 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:46.041 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:23:46.041 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:23:46.041 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:46.042 Malloc1 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:46.042 Asynchronous Event Request test 00:23:46.042 Attaching to 10.0.0.2 00:23:46.042 Attached to 10.0.0.2 00:23:46.042 Registering asynchronous event callbacks... 00:23:46.042 Starting namespace attribute notice tests for all controllers... 00:23:46.042 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:46.042 aer_cb - Changed Namespace 00:23:46.042 Cleaning up... 00:23:46.042 [ 00:23:46.042 { 00:23:46.042 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:46.042 "subtype": "Discovery", 00:23:46.042 "listen_addresses": [], 00:23:46.042 "allow_any_host": true, 00:23:46.042 "hosts": [] 00:23:46.042 }, 00:23:46.042 { 00:23:46.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.042 "subtype": "NVMe", 00:23:46.042 "listen_addresses": [ 00:23:46.042 { 00:23:46.042 "trtype": "TCP", 00:23:46.042 "adrfam": "IPv4", 00:23:46.042 "traddr": "10.0.0.2", 00:23:46.042 "trsvcid": "4420" 00:23:46.042 } 00:23:46.042 ], 00:23:46.042 "allow_any_host": true, 00:23:46.042 "hosts": [], 00:23:46.042 "serial_number": "SPDK00000000000001", 00:23:46.042 "model_number": "SPDK bdev Controller", 00:23:46.042 "max_namespaces": 2, 00:23:46.042 "min_cntlid": 1, 00:23:46.042 "max_cntlid": 65519, 00:23:46.042 "namespaces": [ 00:23:46.042 { 00:23:46.042 "nsid": 1, 00:23:46.042 "bdev_name": "Malloc0", 00:23:46.042 "name": "Malloc0", 00:23:46.042 "nguid": "3F9F009903BC4E57BD00203586BBEE3D", 00:23:46.042 "uuid": "3f9f0099-03bc-4e57-bd00-203586bbee3d" 00:23:46.042 }, 00:23:46.042 { 00:23:46.042 "nsid": 2, 00:23:46.042 "bdev_name": "Malloc1", 00:23:46.042 "name": "Malloc1", 00:23:46.042 "nguid": "960F949AB4524376843E2E6D04117DBE", 00:23:46.042 "uuid": "960f949a-b452-4376-843e-2e6d04117dbe" 00:23:46.042 } 00:23:46.042 ] 00:23:46.042 } 00:23:46.042 ] 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1229772 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.042 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:46.303 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:46.303 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:46.303 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.303 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:46.304 rmmod nvme_tcp 00:23:46.304 rmmod nvme_fabrics 00:23:46.304 rmmod nvme_keyring 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1229423 ']' 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1229423 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 1229423 ']' 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 1229423 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1229423 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1229423' 00:23:46.304 killing process with pid 1229423 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@968 -- # kill 1229423 00:23:46.304 09:38:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@973 -- # wait 1229423 00:23:46.564 09:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:46.564 09:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:46.564 09:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:46.564 09:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:46.565 09:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:46.565 09:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.565 09:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:23:46.565 09:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.478 09:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:48.478 00:23:48.478 real 0m10.839s 00:23:48.478 user 0m7.509s 00:23:48.478 sys 0m5.667s 00:23:48.478 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:48.478 09:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.478 ************************************ 00:23:48.478 END TEST nvmf_aer 00:23:48.478 ************************************ 00:23:48.478 09:38:20 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:48.478 09:38:20 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:48.478 09:38:20 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:48.478 09:38:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:48.740 ************************************ 00:23:48.740 START TEST nvmf_async_init 00:23:48.740 ************************************ 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:48.740 * Looking for test storage... 00:23:48.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.740 
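The nvmf_aer pass above reduces to a short sequence: start the aer tool against cnode1, wait for its touch file, then hot-add a second namespace so the target raises a Namespace Attribute Changed AER (log page 4, aen_event_type 0x02 in the output above). A condensed sketch of that flow, reconstructed from the xtrace rather than quoted from host/aer.sh:

./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &                      # -n 2: expect up to two namespaces
aerpid=$!
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done     # waitforfile; the harness caps this at 200 tries
rpc_cmd bdev_malloc_create 64 4096 --name Malloc1          # 64 MiB bdev, 4096-byte blocks
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # fires the AER
wait $aerpid                                               # aer exits after "aer_cb - Changed Namespace"
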
09:38:20 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f6d7020afbbf44c6a9cddbbffc1a8dd6 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.740 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:48.741 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:48.741 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:48.741 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.741 09:38:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.741 09:38:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.741 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:48.741 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:48.741 09:38:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:48.741 09:38:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:56.884 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.884 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:56.884 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.885 09:38:27 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:56.885 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:56.885 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:56.885 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:56.885 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:56.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:23:56.885 00:23:56.885 --- 10.0.0.2 ping statistics --- 00:23:56.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.885 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:56.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:23:56.885 00:23:56.885 --- 10.0.0.1 ping statistics --- 00:23:56.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.885 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1234016 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1234016 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@830 -- # '[' -z 1234016 ']' 00:23:56.885 09:38:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.886 09:38:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:56.886 09:38:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.886 09:38:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:56.886 09:38:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:56.886 [2024-06-11 09:38:27.747198] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:23:56.886 [2024-06-11 09:38:27.747261] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.886 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.886 [2024-06-11 09:38:27.816492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.886 [2024-06-11 09:38:27.911175] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.886 [2024-06-11 09:38:27.911229] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.886 [2024-06-11 09:38:27.911237] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.886 [2024-06-11 09:38:27.911243] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.886 [2024-06-11 09:38:27.911249] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
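nvmf_tcp_init above carves the back-to-back e810 ports into a loopback topology: one port moves into a private namespace for the target (nvmf_tgt is then launched via ip netns exec cvl_0_0_ns_spdk), while its peer stays in the root namespace for the initiator. A minimal reconstruction from the commands in the log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port

The async_init test that follows provisions the target over RPC and exercises a controller reset (commands as they appear in the xtrace below; $nguid stands for the uuidgen-derived value f6d7020afbbf44c6a9cddbbffc1a8dd6):

rpc_cmd nvmf_create_transport -t tcp -o
rpc_cmd bdev_null_create null0 1024 512                    # 1024 MiB null bdev, 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g $nguid
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
rpc_cmd bdev_nvme_reset_controller nvme0                   # cntlid goes 1 -> 2 in the bdev_get_bdevs dumps

It then repeats the attach against a TLS listener on port 4421 with a file-based PSK (the redirect into the key file is implied by the chmod that follows in the xtrace):

key_path=$(mktemp)                                         # /tmp/tmp.BSWXMWKLLE in this run
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > $key_path
chmod 0600 $key_path
rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk $key_path
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk $key_path
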
00:23:56.886 [2024-06-11 09:38:27.911282] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.886 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:56.886 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:23:56.886 09:38:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:56.886 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:56.886 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:56.886 09:38:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.886 09:38:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:56.886 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:56.886 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:56.886 [2024-06-11 09:38:28.683179] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.886 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:56.886 09:38:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:56.886 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:56.886 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.147 null0 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f6d7020afbbf44c6a9cddbbffc1a8dd6 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.147 [2024-06-11 09:38:28.743523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 
== 0 ]] 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.147 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.409 nvme0n1 00:23:57.409 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.409 09:38:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:57.409 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.409 09:38:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.409 [ 00:23:57.409 { 00:23:57.409 "name": "nvme0n1", 00:23:57.409 "aliases": [ 00:23:57.409 "f6d7020a-fbbf-44c6-a9cd-dbbffc1a8dd6" 00:23:57.409 ], 00:23:57.409 "product_name": "NVMe disk", 00:23:57.409 "block_size": 512, 00:23:57.409 "num_blocks": 2097152, 00:23:57.409 "uuid": "f6d7020a-fbbf-44c6-a9cd-dbbffc1a8dd6", 00:23:57.409 "assigned_rate_limits": { 00:23:57.409 "rw_ios_per_sec": 0, 00:23:57.409 "rw_mbytes_per_sec": 0, 00:23:57.409 "r_mbytes_per_sec": 0, 00:23:57.409 "w_mbytes_per_sec": 0 00:23:57.409 }, 00:23:57.409 "claimed": false, 00:23:57.409 "zoned": false, 00:23:57.409 "supported_io_types": { 00:23:57.409 "read": true, 00:23:57.409 "write": true, 00:23:57.409 "unmap": false, 00:23:57.409 "write_zeroes": true, 00:23:57.409 "flush": true, 00:23:57.409 "reset": true, 00:23:57.409 "compare": true, 00:23:57.409 "compare_and_write": true, 00:23:57.409 "abort": true, 00:23:57.409 "nvme_admin": true, 00:23:57.409 "nvme_io": true 00:23:57.409 }, 00:23:57.409 "memory_domains": [ 00:23:57.409 { 00:23:57.409 "dma_device_id": "system", 00:23:57.409 "dma_device_type": 1 00:23:57.409 } 00:23:57.409 ], 00:23:57.409 "driver_specific": { 00:23:57.409 "nvme": [ 00:23:57.409 { 00:23:57.409 "trid": { 00:23:57.409 "trtype": "TCP", 00:23:57.409 "adrfam": "IPv4", 00:23:57.409 "traddr": "10.0.0.2", 00:23:57.409 "trsvcid": "4420", 00:23:57.409 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:57.409 }, 00:23:57.409 "ctrlr_data": { 00:23:57.409 "cntlid": 1, 00:23:57.409 "vendor_id": "0x8086", 00:23:57.409 "model_number": "SPDK bdev Controller", 00:23:57.409 "serial_number": "00000000000000000000", 00:23:57.409 "firmware_revision": "24.09", 00:23:57.409 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:57.409 "oacs": { 00:23:57.409 "security": 0, 00:23:57.409 "format": 0, 00:23:57.409 "firmware": 0, 00:23:57.409 "ns_manage": 0 00:23:57.409 }, 00:23:57.409 "multi_ctrlr": true, 00:23:57.409 "ana_reporting": false 00:23:57.409 }, 00:23:57.409 "vs": { 00:23:57.409 "nvme_version": "1.3" 00:23:57.409 }, 00:23:57.409 "ns_data": { 00:23:57.409 "id": 1, 00:23:57.409 "can_share": true 00:23:57.409 } 00:23:57.409 } 00:23:57.409 ], 00:23:57.409 "mp_policy": "active_passive" 00:23:57.409 } 00:23:57.409 } 00:23:57.409 ] 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.409 [2024-06-11 09:38:29.016740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:57.409 [2024-06-11 09:38:29.016830] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d4b20 (9): Bad file descriptor 00:23:57.409 [2024-06-11 09:38:29.148422] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.409 [ 00:23:57.409 { 00:23:57.409 "name": "nvme0n1", 00:23:57.409 "aliases": [ 00:23:57.409 "f6d7020a-fbbf-44c6-a9cd-dbbffc1a8dd6" 00:23:57.409 ], 00:23:57.409 "product_name": "NVMe disk", 00:23:57.409 "block_size": 512, 00:23:57.409 "num_blocks": 2097152, 00:23:57.409 "uuid": "f6d7020a-fbbf-44c6-a9cd-dbbffc1a8dd6", 00:23:57.409 "assigned_rate_limits": { 00:23:57.409 "rw_ios_per_sec": 0, 00:23:57.409 "rw_mbytes_per_sec": 0, 00:23:57.409 "r_mbytes_per_sec": 0, 00:23:57.409 "w_mbytes_per_sec": 0 00:23:57.409 }, 00:23:57.409 "claimed": false, 00:23:57.409 "zoned": false, 00:23:57.409 "supported_io_types": { 00:23:57.409 "read": true, 00:23:57.409 "write": true, 00:23:57.409 "unmap": false, 00:23:57.409 "write_zeroes": true, 00:23:57.409 "flush": true, 00:23:57.409 "reset": true, 00:23:57.409 "compare": true, 00:23:57.409 "compare_and_write": true, 00:23:57.409 "abort": true, 00:23:57.409 "nvme_admin": true, 00:23:57.409 "nvme_io": true 00:23:57.409 }, 00:23:57.409 "memory_domains": [ 00:23:57.409 { 00:23:57.409 "dma_device_id": "system", 00:23:57.409 "dma_device_type": 1 00:23:57.409 } 00:23:57.409 ], 00:23:57.409 "driver_specific": { 00:23:57.409 "nvme": [ 00:23:57.409 { 00:23:57.409 "trid": { 00:23:57.409 "trtype": "TCP", 00:23:57.409 "adrfam": "IPv4", 00:23:57.409 "traddr": "10.0.0.2", 00:23:57.409 "trsvcid": "4420", 00:23:57.409 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:57.409 }, 00:23:57.409 "ctrlr_data": { 00:23:57.409 "cntlid": 2, 00:23:57.409 "vendor_id": "0x8086", 00:23:57.409 "model_number": "SPDK bdev Controller", 00:23:57.409 "serial_number": "00000000000000000000", 00:23:57.409 "firmware_revision": "24.09", 00:23:57.409 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:57.409 "oacs": { 00:23:57.409 "security": 0, 00:23:57.409 "format": 0, 00:23:57.409 "firmware": 0, 00:23:57.409 "ns_manage": 0 00:23:57.409 }, 00:23:57.409 "multi_ctrlr": true, 00:23:57.409 "ana_reporting": false 00:23:57.409 }, 00:23:57.409 "vs": { 00:23:57.409 "nvme_version": "1.3" 00:23:57.409 }, 00:23:57.409 "ns_data": { 00:23:57.409 "id": 1, 00:23:57.409 "can_share": true 00:23:57.409 } 00:23:57.409 } 00:23:57.409 ], 00:23:57.409 "mp_policy": "active_passive" 00:23:57.409 } 00:23:57.409 } 00:23:57.409 ] 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 
-- # mktemp 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.BSWXMWKLLE 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.BSWXMWKLLE 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.409 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.409 [2024-06-11 09:38:29.221404] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:57.409 [2024-06-11 09:38:29.221577] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BSWXMWKLLE 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.671 [2024-06-11 09:38:29.233427] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BSWXMWKLLE 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.671 [2024-06-11 09:38:29.245455] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:57.671 [2024-06-11 09:38:29.245508] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:57.671 nvme0n1 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.671 [ 00:23:57.671 { 00:23:57.671 "name": "nvme0n1", 00:23:57.671 "aliases": [ 00:23:57.671 "f6d7020a-fbbf-44c6-a9cd-dbbffc1a8dd6" 00:23:57.671 ], 00:23:57.671 
"product_name": "NVMe disk", 00:23:57.671 "block_size": 512, 00:23:57.671 "num_blocks": 2097152, 00:23:57.671 "uuid": "f6d7020a-fbbf-44c6-a9cd-dbbffc1a8dd6", 00:23:57.671 "assigned_rate_limits": { 00:23:57.671 "rw_ios_per_sec": 0, 00:23:57.671 "rw_mbytes_per_sec": 0, 00:23:57.671 "r_mbytes_per_sec": 0, 00:23:57.671 "w_mbytes_per_sec": 0 00:23:57.671 }, 00:23:57.671 "claimed": false, 00:23:57.671 "zoned": false, 00:23:57.671 "supported_io_types": { 00:23:57.671 "read": true, 00:23:57.671 "write": true, 00:23:57.671 "unmap": false, 00:23:57.671 "write_zeroes": true, 00:23:57.671 "flush": true, 00:23:57.671 "reset": true, 00:23:57.671 "compare": true, 00:23:57.671 "compare_and_write": true, 00:23:57.671 "abort": true, 00:23:57.671 "nvme_admin": true, 00:23:57.671 "nvme_io": true 00:23:57.671 }, 00:23:57.671 "memory_domains": [ 00:23:57.671 { 00:23:57.671 "dma_device_id": "system", 00:23:57.671 "dma_device_type": 1 00:23:57.671 } 00:23:57.671 ], 00:23:57.671 "driver_specific": { 00:23:57.671 "nvme": [ 00:23:57.671 { 00:23:57.671 "trid": { 00:23:57.671 "trtype": "TCP", 00:23:57.671 "adrfam": "IPv4", 00:23:57.671 "traddr": "10.0.0.2", 00:23:57.671 "trsvcid": "4421", 00:23:57.671 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:57.671 }, 00:23:57.671 "ctrlr_data": { 00:23:57.671 "cntlid": 3, 00:23:57.671 "vendor_id": "0x8086", 00:23:57.671 "model_number": "SPDK bdev Controller", 00:23:57.671 "serial_number": "00000000000000000000", 00:23:57.671 "firmware_revision": "24.09", 00:23:57.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:57.671 "oacs": { 00:23:57.671 "security": 0, 00:23:57.671 "format": 0, 00:23:57.671 "firmware": 0, 00:23:57.671 "ns_manage": 0 00:23:57.671 }, 00:23:57.671 "multi_ctrlr": true, 00:23:57.671 "ana_reporting": false 00:23:57.671 }, 00:23:57.671 "vs": { 00:23:57.671 "nvme_version": "1.3" 00:23:57.671 }, 00:23:57.671 "ns_data": { 00:23:57.671 "id": 1, 00:23:57.671 "can_share": true 00:23:57.671 } 00:23:57.671 } 00:23:57.671 ], 00:23:57.671 "mp_policy": "active_passive" 00:23:57.671 } 00:23:57.671 } 00:23:57.671 ] 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.BSWXMWKLLE 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:57.671 rmmod nvme_tcp 00:23:57.671 rmmod nvme_fabrics 00:23:57.671 rmmod nvme_keyring 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1234016 ']' 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1234016 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 1234016 ']' 00:23:57.671 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 1234016 00:23:57.672 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:23:57.672 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:57.672 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1234016 00:23:57.932 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:57.932 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:57.932 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1234016' 00:23:57.932 killing process with pid 1234016 00:23:57.932 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 1234016 00:23:57.932 [2024-06-11 09:38:29.498885] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:57.932 [2024-06-11 09:38:29.498925] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:57.932 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 1234016 00:23:57.932 09:38:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:57.932 09:38:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:57.932 09:38:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:57.932 09:38:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:57.932 09:38:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:57.932 09:38:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.932 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.932 09:38:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.478 09:38:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:00.478 00:24:00.478 real 0m11.435s 00:24:00.478 user 0m4.202s 00:24:00.478 sys 0m5.821s 00:24:00.478 09:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:00.478 09:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:00.478 ************************************ 00:24:00.478 END TEST nvmf_async_init 00:24:00.478 ************************************ 00:24:00.478 09:38:31 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:00.478 09:38:31 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:00.478 09:38:31 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:00.478 09:38:31 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:24:00.478 ************************************ 00:24:00.478 START TEST dma 00:24:00.478 ************************************ 00:24:00.478 09:38:31 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:00.478 * Looking for test storage... 00:24:00.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:00.478 09:38:31 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:00.478 09:38:31 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.478 09:38:31 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.478 09:38:31 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.478 09:38:31 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.478 09:38:31 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.478 09:38:31 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.478 09:38:31 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:00.478 09:38:31 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:00.478 09:38:31 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:00.478 09:38:31 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:00.478 09:38:31 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:00.478 00:24:00.478 real 0m0.134s 00:24:00.478 user 0m0.057s 00:24:00.478 sys 0m0.085s 00:24:00.478 09:38:31 nvmf_tcp.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:00.478 09:38:31 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:24:00.478 ************************************ 00:24:00.478 END TEST dma 00:24:00.478 ************************************ 00:24:00.478 09:38:31 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:00.478 09:38:31 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:00.478 09:38:31 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:00.478 09:38:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:00.478 ************************************ 00:24:00.478 START TEST 
nvmf_identify 00:24:00.478 ************************************ 00:24:00.478 09:38:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:00.478 * Looking for test storage... 00:24:00.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:00.478 09:38:32 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:00.478 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:00.478 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:00.479 09:38:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:07.072 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:07.072 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.072 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:07.333 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:07.334 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:07.334 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.334 09:38:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.334 09:38:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.334 09:38:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:07.334 09:38:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:07.334 09:38:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:07.334 09:38:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:07.594 09:38:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.594 09:38:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:07.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:24:07.594 00:24:07.594 --- 10.0.0.2 ping statistics --- 00:24:07.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.594 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:24:07.594 09:38:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:07.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:24:07.594 00:24:07.594 --- 10.0.0.1 ping statistics --- 00:24:07.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.594 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:24:07.594 09:38:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.594 09:38:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:07.594 09:38:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:07.594 09:38:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.594 09:38:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:07.594 09:38:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:07.595 09:38:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.595 09:38:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:07.595 09:38:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:07.595 09:38:39 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:07.595 09:38:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:07.595 09:38:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:07.595 09:38:39 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1238481 00:24:07.595 09:38:39 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:07.595 09:38:39 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:07.595 09:38:39 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1238481 00:24:07.595 09:38:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 1238481 ']' 00:24:07.595 09:38:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.595 09:38:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:07.595 09:38:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.595 09:38:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:07.595 09:38:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:07.595 [2024-06-11 09:38:39.275834] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:24:07.595 [2024-06-11 09:38:39.275882] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.595 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.595 [2024-06-11 09:38:39.361511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:07.855 [2024-06-11 09:38:39.429898] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
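The nvmf_tcp_init sequence traced above is what wires the two E810 ports together: the target port (cvl_0_0) is moved into a private network namespace, cvl_0_0_ns_spdk, so that target and initiator traffic traverses the link between the two ports (presumably cabled back-to-back on this rig) even though both ends live on one host. A minimal standalone sketch of that wiring, reusing the interface names and addresses from the trace, would be:

# run as root; cvl_0_0 / cvl_0_1 are the two E810 net devices found above
ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port out of the default ns
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (NVMF_INITIATOR_IP)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (NVMF_FIRST_TARGET_IP)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check

Every command here is taken verbatim from the nvmf/common.sh trace above; only the comments are added.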
00:24:07.855 [2024-06-11 09:38:39.429935] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.855 [2024-06-11 09:38:39.429943] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.855 [2024-06-11 09:38:39.429952] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.855 [2024-06-11 09:38:39.429957] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:07.855 [2024-06-11 09:38:39.433332] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.855 [2024-06-11 09:38:39.433371] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.855 [2024-06-11 09:38:39.433525] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:07.855 [2024-06-11 09:38:39.433526] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.427 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:08.427 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:24:08.427 09:38:40 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:08.427 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.427 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:08.427 [2024-06-11 09:38:40.157058] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.427 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.427 09:38:40 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:08.427 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:08.427 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:08.427 09:38:40 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:08.427 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.427 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:08.427 Malloc0 00:24:08.427 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.427 09:38:40 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:08.428 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.428 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:08.428 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.428 09:38:40 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:08.428 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.428 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:08.691 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.691 09:38:40 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:08.691 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 
-- # xtrace_disable 00:24:08.691 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:08.691 [2024-06-11 09:38:40.256491] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.691 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.691 09:38:40 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:08.691 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.691 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:08.691 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.691 09:38:40 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:08.691 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.691 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:08.691 [ 00:24:08.691 { 00:24:08.691 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:08.691 "subtype": "Discovery", 00:24:08.691 "listen_addresses": [ 00:24:08.691 { 00:24:08.691 "trtype": "TCP", 00:24:08.691 "adrfam": "IPv4", 00:24:08.691 "traddr": "10.0.0.2", 00:24:08.691 "trsvcid": "4420" 00:24:08.691 } 00:24:08.691 ], 00:24:08.691 "allow_any_host": true, 00:24:08.691 "hosts": [] 00:24:08.691 }, 00:24:08.691 { 00:24:08.691 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:08.691 "subtype": "NVMe", 00:24:08.691 "listen_addresses": [ 00:24:08.691 { 00:24:08.691 "trtype": "TCP", 00:24:08.691 "adrfam": "IPv4", 00:24:08.691 "traddr": "10.0.0.2", 00:24:08.691 "trsvcid": "4420" 00:24:08.691 } 00:24:08.691 ], 00:24:08.691 "allow_any_host": true, 00:24:08.691 "hosts": [], 00:24:08.691 "serial_number": "SPDK00000000000001", 00:24:08.691 "model_number": "SPDK bdev Controller", 00:24:08.691 "max_namespaces": 32, 00:24:08.691 "min_cntlid": 1, 00:24:08.691 "max_cntlid": 65519, 00:24:08.691 "namespaces": [ 00:24:08.691 { 00:24:08.691 "nsid": 1, 00:24:08.691 "bdev_name": "Malloc0", 00:24:08.691 "name": "Malloc0", 00:24:08.691 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:08.691 "eui64": "ABCDEF0123456789", 00:24:08.691 "uuid": "e2cf41d8-6465-48dc-84c2-95b0cc62a651" 00:24:08.691 } 00:24:08.691 ] 00:24:08.691 } 00:24:08.691 ] 00:24:08.691 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.691 09:38:40 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:08.691 [2024-06-11 09:38:40.313594] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
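The rpc_cmd calls traced above (host/identify.sh@24 through @37) are the entire target-side configuration behind the nvmf_get_subsystems dump: one TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as namespace 1, and listeners for both the subsystem and the discovery service on 10.0.0.2:4420. In the harness, rpc_cmd forwards its arguments to SPDK's scripts/rpc.py; a standalone sketch of the same sequence, assuming a target already running in the namespace and listening on the default /var/tmp/spdk.sock RPC socket, would be:

RPC="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192       # -o disables the TCP C2H-success optimization, -u sets an 8 KiB I/O unit size
$RPC bdev_malloc_create 64 512 -b Malloc0          # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems                           # should reproduce the JSON printed above

The RPC names and arguments are copied from the trace; only the explicit rpc.py invocation and the comments are assumptions about running the same steps by hand.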
00:24:08.691 [2024-06-11 09:38:40.313656] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238789 ] 00:24:08.691 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.691 [2024-06-11 09:38:40.346950] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:08.691 [2024-06-11 09:38:40.346998] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:08.691 [2024-06-11 09:38:40.347003] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:08.691 [2024-06-11 09:38:40.347014] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:08.691 [2024-06-11 09:38:40.347022] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:08.691 [2024-06-11 09:38:40.350352] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:08.691 [2024-06-11 09:38:40.350385] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1142ec0 0 00:24:08.691 [2024-06-11 09:38:40.358324] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:08.691 [2024-06-11 09:38:40.358335] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:08.691 [2024-06-11 09:38:40.358339] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:08.691 [2024-06-11 09:38:40.358342] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:08.691 [2024-06-11 09:38:40.358377] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:08.691 [2024-06-11 09:38:40.358383] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:08.691 [2024-06-11 09:38:40.358387] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142ec0) 00:24:08.692 [2024-06-11 09:38:40.358400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:08.692 [2024-06-11 09:38:40.358416] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5df0, cid 0, qid 0 00:24:08.692 [2024-06-11 09:38:40.366328] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:08.692 [2024-06-11 09:38:40.366338] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:08.692 [2024-06-11 09:38:40.366342] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.366346] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c5df0) on tqpair=0x1142ec0 00:24:08.692 [2024-06-11 09:38:40.366357] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:08.692 [2024-06-11 09:38:40.366376] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:08.692 [2024-06-11 09:38:40.366382] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:08.692 [2024-06-11 09:38:40.366396] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.366400] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.366408] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142ec0) 00:24:08.692 [2024-06-11 09:38:40.366416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.692 [2024-06-11 09:38:40.366429] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5df0, cid 0, qid 0 00:24:08.692 [2024-06-11 09:38:40.366643] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:08.692 [2024-06-11 09:38:40.366650] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:08.692 [2024-06-11 09:38:40.366654] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.366658] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c5df0) on tqpair=0x1142ec0 00:24:08.692 [2024-06-11 09:38:40.366663] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:08.692 [2024-06-11 09:38:40.366670] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:08.692 [2024-06-11 09:38:40.366677] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.366681] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.366684] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142ec0) 00:24:08.692 [2024-06-11 09:38:40.366691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.692 [2024-06-11 09:38:40.366701] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5df0, cid 0, qid 0 00:24:08.692 [2024-06-11 09:38:40.366901] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:08.692 [2024-06-11 09:38:40.366907] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:08.692 [2024-06-11 09:38:40.366910] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.366914] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c5df0) on tqpair=0x1142ec0 00:24:08.692 [2024-06-11 09:38:40.366920] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:08.692 [2024-06-11 09:38:40.366928] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:08.692 [2024-06-11 09:38:40.366934] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.366938] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.366941] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142ec0) 00:24:08.692 [2024-06-11 09:38:40.366948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.692 [2024-06-11 09:38:40.366958] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5df0, cid 0, qid 0 00:24:08.692 [2024-06-11 09:38:40.367169] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:08.692 [2024-06-11 
09:38:40.367176] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:08.692 [2024-06-11 09:38:40.367179] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.367183] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c5df0) on tqpair=0x1142ec0 00:24:08.692 [2024-06-11 09:38:40.367189] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:08.692 [2024-06-11 09:38:40.367197] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.367201] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.367205] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142ec0) 00:24:08.692 [2024-06-11 09:38:40.367211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.692 [2024-06-11 09:38:40.367224] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5df0, cid 0, qid 0 00:24:08.692 [2024-06-11 09:38:40.367432] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:08.692 [2024-06-11 09:38:40.367439] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:08.692 [2024-06-11 09:38:40.367442] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.367446] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c5df0) on tqpair=0x1142ec0 00:24:08.692 [2024-06-11 09:38:40.367451] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:08.692 [2024-06-11 09:38:40.367456] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:08.692 [2024-06-11 09:38:40.367463] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:08.692 [2024-06-11 09:38:40.367569] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:08.692 [2024-06-11 09:38:40.367574] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:08.692 [2024-06-11 09:38:40.367581] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.367585] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.367588] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142ec0) 00:24:08.692 [2024-06-11 09:38:40.367595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.692 [2024-06-11 09:38:40.367605] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5df0, cid 0, qid 0 00:24:08.692 [2024-06-11 09:38:40.367796] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:08.692 [2024-06-11 09:38:40.367803] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:08.692 [2024-06-11 09:38:40.367806] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.367810] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c5df0) on tqpair=0x1142ec0 00:24:08.692 [2024-06-11 09:38:40.367815] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:08.692 [2024-06-11 09:38:40.367824] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.367828] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.367831] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142ec0) 00:24:08.692 [2024-06-11 09:38:40.367838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.692 [2024-06-11 09:38:40.367847] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5df0, cid 0, qid 0 00:24:08.692 [2024-06-11 09:38:40.368055] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:08.692 [2024-06-11 09:38:40.368061] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:08.692 [2024-06-11 09:38:40.368064] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.368068] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c5df0) on tqpair=0x1142ec0 00:24:08.692 [2024-06-11 09:38:40.368073] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:08.692 [2024-06-11 09:38:40.368078] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:08.692 [2024-06-11 09:38:40.368085] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:08.692 [2024-06-11 09:38:40.368095] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:08.692 [2024-06-11 09:38:40.368104] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.368108] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142ec0) 00:24:08.692 [2024-06-11 09:38:40.368114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.692 [2024-06-11 09:38:40.368124] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5df0, cid 0, qid 0 00:24:08.692 [2024-06-11 09:38:40.368357] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:08.692 [2024-06-11 09:38:40.368365] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:08.692 [2024-06-11 09:38:40.368368] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.368372] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1142ec0): datao=0, datal=4096, cccid=0 00:24:08.692 [2024-06-11 09:38:40.368377] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11c5df0) on tqpair(0x1142ec0): expected_datao=0, payload_size=4096 00:24:08.692 [2024-06-11 09:38:40.368381] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.368389] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.368393] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.413322] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:08.692 [2024-06-11 09:38:40.413332] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:08.692 [2024-06-11 09:38:40.413336] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:08.692 [2024-06-11 09:38:40.413340] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c5df0) on tqpair=0x1142ec0 00:24:08.692 [2024-06-11 09:38:40.413348] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:08.692 [2024-06-11 09:38:40.413353] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:08.692 [2024-06-11 09:38:40.413358] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:08.692 [2024-06-11 09:38:40.413366] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:08.693 [2024-06-11 09:38:40.413371] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:08.693 [2024-06-11 09:38:40.413375] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:08.693 [2024-06-11 09:38:40.413383] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:08.693 [2024-06-11 09:38:40.413390] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.413394] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.413398] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142ec0) 00:24:08.693 [2024-06-11 09:38:40.413405] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:08.693 [2024-06-11 09:38:40.413417] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5df0, cid 0, qid 0 00:24:08.693 [2024-06-11 09:38:40.413606] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:08.693 [2024-06-11 09:38:40.413612] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:08.693 [2024-06-11 09:38:40.413615] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.413622] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c5df0) on tqpair=0x1142ec0 00:24:08.693 [2024-06-11 09:38:40.413630] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.413633] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.413637] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1142ec0) 00:24:08.693 [2024-06-11 09:38:40.413643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:24:08.693 [2024-06-11 09:38:40.413649] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.413653] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.413656] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1142ec0) 00:24:08.693 [2024-06-11 09:38:40.413662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.693 [2024-06-11 09:38:40.413668] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.413671] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.413675] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1142ec0) 00:24:08.693 [2024-06-11 09:38:40.413680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.693 [2024-06-11 09:38:40.413686] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.413690] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.413693] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142ec0) 00:24:08.693 [2024-06-11 09:38:40.413699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.693 [2024-06-11 09:38:40.413704] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:08.693 [2024-06-11 09:38:40.413714] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:08.693 [2024-06-11 09:38:40.413721] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.413724] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1142ec0) 00:24:08.693 [2024-06-11 09:38:40.413731] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.693 [2024-06-11 09:38:40.413743] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5df0, cid 0, qid 0 00:24:08.693 [2024-06-11 09:38:40.413748] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5f50, cid 1, qid 0 00:24:08.693 [2024-06-11 09:38:40.413752] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c60b0, cid 2, qid 0 00:24:08.693 [2024-06-11 09:38:40.413757] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6210, cid 3, qid 0 00:24:08.693 [2024-06-11 09:38:40.413762] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6370, cid 4, qid 0 00:24:08.693 [2024-06-11 09:38:40.414012] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:08.693 [2024-06-11 09:38:40.414018] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:08.693 [2024-06-11 09:38:40.414022] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.414026] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6370) on tqpair=0x1142ec0 
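Two admin-queue details in the exchange just traced are worth noting: the SET FEATURES ASYNC EVENT CONFIGURATION command (cdw10:0000000b, i.e. Feature Identifier 0Bh) is followed by four outstanding ASYNC EVENT REQUESTs (cids 0 through 3), matching the Async Event Request Limit of 4 the controller advertises in the identify dump below; and the GET FEATURES KEEP ALIVE TIMER command (cdw10:0000000f, FID 0Fh) precedes the "Sending keep alive every 5000000 us" notice that follows. With the kernel initiator the same features are reachable through nvme-cli; a hypothetical equivalent, assuming the controller appears as /dev/nvme0, might be:

nvme get-feature /dev/nvme0 -f 0x0b -H    # FID 0x0b: Asynchronous Event Configuration
nvme get-feature /dev/nvme0 -f 0x0f -H    # FID 0x0f: Keep Alive Timer (KATO, reported in ms)
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -k 5   # choose KATO (seconds) at connect time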
00:24:08.693 [2024-06-11 09:38:40.414031] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:08.693 [2024-06-11 09:38:40.414036] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:08.693 [2024-06-11 09:38:40.414049] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.414052] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1142ec0) 00:24:08.693 [2024-06-11 09:38:40.414059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.693 [2024-06-11 09:38:40.414068] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6370, cid 4, qid 0 00:24:08.693 [2024-06-11 09:38:40.414296] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:08.693 [2024-06-11 09:38:40.414303] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:08.693 [2024-06-11 09:38:40.414306] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.414310] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1142ec0): datao=0, datal=4096, cccid=4 00:24:08.693 [2024-06-11 09:38:40.414319] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11c6370) on tqpair(0x1142ec0): expected_datao=0, payload_size=4096 00:24:08.693 [2024-06-11 09:38:40.414324] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.414330] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.414334] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.414585] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:08.693 [2024-06-11 09:38:40.414591] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:08.693 [2024-06-11 09:38:40.414594] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.414598] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6370) on tqpair=0x1142ec0 00:24:08.693 [2024-06-11 09:38:40.414610] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:08.693 [2024-06-11 09:38:40.414634] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.414638] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1142ec0) 00:24:08.693 [2024-06-11 09:38:40.414644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.693 [2024-06-11 09:38:40.414651] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.414655] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.414658] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1142ec0) 00:24:08.693 [2024-06-11 09:38:40.414664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.693 [2024-06-11 09:38:40.414677] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6370, cid 4, qid 0 00:24:08.693 [2024-06-11 09:38:40.414682] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c64d0, cid 5, qid 0 00:24:08.693 [2024-06-11 09:38:40.414906] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:08.693 [2024-06-11 09:38:40.414912] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:08.693 [2024-06-11 09:38:40.414916] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.414919] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1142ec0): datao=0, datal=1024, cccid=4 00:24:08.693 [2024-06-11 09:38:40.414924] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11c6370) on tqpair(0x1142ec0): expected_datao=0, payload_size=1024 00:24:08.693 [2024-06-11 09:38:40.414928] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.414934] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.414938] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.414943] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:08.693 [2024-06-11 09:38:40.414949] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:08.693 [2024-06-11 09:38:40.414954] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.414958] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c64d0) on tqpair=0x1142ec0 00:24:08.693 [2024-06-11 09:38:40.455570] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:08.693 [2024-06-11 09:38:40.455582] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:08.693 [2024-06-11 09:38:40.455586] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.455590] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6370) on tqpair=0x1142ec0 00:24:08.693 [2024-06-11 09:38:40.455605] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.455609] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1142ec0) 00:24:08.693 [2024-06-11 09:38:40.455616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.693 [2024-06-11 09:38:40.455630] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6370, cid 4, qid 0 00:24:08.693 [2024-06-11 09:38:40.455871] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:08.693 [2024-06-11 09:38:40.455878] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:08.693 [2024-06-11 09:38:40.455881] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.455885] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1142ec0): datao=0, datal=3072, cccid=4 00:24:08.693 [2024-06-11 09:38:40.455890] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11c6370) on tqpair(0x1142ec0): expected_datao=0, payload_size=3072 00:24:08.693 [2024-06-11 09:38:40.455894] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.455901] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:24:08.693 [2024-06-11 09:38:40.455904] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.456069] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:08.693 [2024-06-11 09:38:40.456075] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:08.693 [2024-06-11 09:38:40.456078] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:08.693 [2024-06-11 09:38:40.456082] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6370) on tqpair=0x1142ec0 00:24:08.694 [2024-06-11 09:38:40.456090] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:08.694 [2024-06-11 09:38:40.456094] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1142ec0) 00:24:08.694 [2024-06-11 09:38:40.456101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:08.694 [2024-06-11 09:38:40.456113] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6370, cid 4, qid 0 00:24:08.694 [2024-06-11 09:38:40.456332] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:08.694 [2024-06-11 09:38:40.456338] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:08.694 [2024-06-11 09:38:40.456342] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:08.694 [2024-06-11 09:38:40.456345] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1142ec0): datao=0, datal=8, cccid=4 00:24:08.694 [2024-06-11 09:38:40.456350] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11c6370) on tqpair(0x1142ec0): expected_datao=0, payload_size=8 00:24:08.694 [2024-06-11 09:38:40.456354] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:08.694 [2024-06-11 09:38:40.456360] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:08.694 [2024-06-11 09:38:40.456364] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:08.694 [2024-06-11 09:38:40.501324] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:08.694 [2024-06-11 09:38:40.501334] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:08.694 [2024-06-11 09:38:40.501340] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:08.694 [2024-06-11 09:38:40.501344] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6370) on tqpair=0x1142ec0 00:24:08.694 ===================================================== 00:24:08.694 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:08.694 ===================================================== 00:24:08.694 Controller Capabilities/Features 00:24:08.694 ================================ 00:24:08.694 Vendor ID: 0000 00:24:08.694 Subsystem Vendor ID: 0000 00:24:08.694 Serial Number: .................... 00:24:08.694 Model Number: ........................................ 
00:24:08.694 Firmware Version: 24.09
00:24:08.694 Recommended Arb Burst: 0
00:24:08.694 IEEE OUI Identifier: 00 00 00
00:24:08.694 Multi-path I/O
00:24:08.694 May have multiple subsystem ports: No
00:24:08.694 May have multiple controllers: No
00:24:08.694 Associated with SR-IOV VF: No
00:24:08.694 Max Data Transfer Size: 131072
00:24:08.694 Max Number of Namespaces: 0
00:24:08.694 Max Number of I/O Queues: 1024
00:24:08.694 NVMe Specification Version (VS): 1.3
00:24:08.694 NVMe Specification Version (Identify): 1.3
00:24:08.694 Maximum Queue Entries: 128
00:24:08.694 Contiguous Queues Required: Yes
00:24:08.694 Arbitration Mechanisms Supported
00:24:08.694 Weighted Round Robin: Not Supported
00:24:08.694 Vendor Specific: Not Supported
00:24:08.694 Reset Timeout: 15000 ms
00:24:08.694 Doorbell Stride: 4 bytes
00:24:08.694 NVM Subsystem Reset: Not Supported
00:24:08.694 Command Sets Supported
00:24:08.694 NVM Command Set: Supported
00:24:08.694 Boot Partition: Not Supported
00:24:08.694 Memory Page Size Minimum: 4096 bytes
00:24:08.694 Memory Page Size Maximum: 4096 bytes
00:24:08.694 Persistent Memory Region: Not Supported
00:24:08.694 Optional Asynchronous Events Supported
00:24:08.694 Namespace Attribute Notices: Not Supported
00:24:08.694 Firmware Activation Notices: Not Supported
00:24:08.694 ANA Change Notices: Not Supported
00:24:08.694 PLE Aggregate Log Change Notices: Not Supported
00:24:08.694 LBA Status Info Alert Notices: Not Supported
00:24:08.694 EGE Aggregate Log Change Notices: Not Supported
00:24:08.694 Normal NVM Subsystem Shutdown event: Not Supported
00:24:08.694 Zone Descriptor Change Notices: Not Supported
00:24:08.694 Discovery Log Change Notices: Supported
00:24:08.694 Controller Attributes
00:24:08.694 128-bit Host Identifier: Not Supported
00:24:08.694 Non-Operational Permissive Mode: Not Supported
00:24:08.694 NVM Sets: Not Supported
00:24:08.694 Read Recovery Levels: Not Supported
00:24:08.694 Endurance Groups: Not Supported
00:24:08.694 Predictable Latency Mode: Not Supported
00:24:08.694 Traffic Based Keep Alive: Not Supported
00:24:08.694 Namespace Granularity: Not Supported
00:24:08.694 SQ Associations: Not Supported
00:24:08.694 UUID List: Not Supported
00:24:08.694 Multi-Domain Subsystem: Not Supported
00:24:08.694 Fixed Capacity Management: Not Supported
00:24:08.694 Variable Capacity Management: Not Supported
00:24:08.694 Delete Endurance Group: Not Supported
00:24:08.694 Delete NVM Set: Not Supported
00:24:08.694 Extended LBA Formats Supported: Not Supported
00:24:08.694 Flexible Data Placement Supported: Not Supported
00:24:08.694
00:24:08.694 Controller Memory Buffer Support
00:24:08.694 ================================
00:24:08.694 Supported: No
00:24:08.694
00:24:08.694 Persistent Memory Region Support
00:24:08.694 ================================
00:24:08.694 Supported: No
00:24:08.694
00:24:08.694 Admin Command Set Attributes
00:24:08.694 ============================
00:24:08.694 Security Send/Receive: Not Supported
00:24:08.694 Format NVM: Not Supported
00:24:08.694 Firmware Activate/Download: Not Supported
00:24:08.694 Namespace Management: Not Supported
00:24:08.694 Device Self-Test: Not Supported
00:24:08.694 Directives: Not Supported
00:24:08.694 NVMe-MI: Not Supported
00:24:08.694 Virtualization Management: Not Supported
00:24:08.694 Doorbell Buffer Config: Not Supported
00:24:08.694 Get LBA Status Capability: Not Supported
00:24:08.694 Command & Feature Lockdown Capability: Not Supported
00:24:08.694 Abort Command Limit: 1
00:24:08.694 Async Event Request Limit: 4
00:24:08.694 Number of Firmware Slots: N/A
00:24:08.694 Firmware Slot 1 Read-Only: N/A
00:24:08.694 Firmware Activation Without Reset: N/A
00:24:08.694 Multiple Update Detection Support: N/A
00:24:08.694 Firmware Update Granularity: No Information Provided
00:24:08.694 Per-Namespace SMART Log: No
00:24:08.694 Asymmetric Namespace Access Log Page: Not Supported
00:24:08.694 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:08.694 Command Effects Log Page: Not Supported
00:24:08.694 Get Log Page Extended Data: Supported
00:24:08.694 Telemetry Log Pages: Not Supported
00:24:08.694 Persistent Event Log Pages: Not Supported
00:24:08.694 Supported Log Pages Log Page: May Support
00:24:08.694 Commands Supported & Effects Log Page: Not Supported
00:24:08.694 Feature Identifiers & Effects Log Page: May Support
00:24:08.694 NVMe-MI Commands & Effects Log Page: May Support
00:24:08.694 Data Area 4 for Telemetry Log: Not Supported
00:24:08.694 Error Log Page Entries Supported: 128
00:24:08.694 Keep Alive: Not Supported
00:24:08.694
00:24:08.694 NVM Command Set Attributes
00:24:08.694 ==========================
00:24:08.694 Submission Queue Entry Size
00:24:08.694 Max: 1
00:24:08.694 Min: 1
00:24:08.694 Completion Queue Entry Size
00:24:08.694 Max: 1
00:24:08.694 Min: 1
00:24:08.694 Number of Namespaces: 0
00:24:08.694 Compare Command: Not Supported
00:24:08.694 Write Uncorrectable Command: Not Supported
00:24:08.694 Dataset Management Command: Not Supported
00:24:08.694 Write Zeroes Command: Not Supported
00:24:08.694 Set Features Save Field: Not Supported
00:24:08.694 Reservations: Not Supported
00:24:08.694 Timestamp: Not Supported
00:24:08.694 Copy: Not Supported
00:24:08.694 Volatile Write Cache: Not Present
00:24:08.694 Atomic Write Unit (Normal): 1
00:24:08.694 Atomic Write Unit (PFail): 1
00:24:08.694 Atomic Compare & Write Unit: 1
00:24:08.694 Fused Compare & Write: Supported
00:24:08.694 Scatter-Gather List
00:24:08.694 SGL Command Set: Supported
00:24:08.694 SGL Keyed: Supported
00:24:08.694 SGL Bit Bucket Descriptor: Not Supported
00:24:08.694 SGL Metadata Pointer: Not Supported
00:24:08.694 Oversized SGL: Not Supported
00:24:08.694 SGL Metadata Address: Not Supported
00:24:08.694 SGL Offset: Supported
00:24:08.694 Transport SGL Data Block: Not Supported
00:24:08.694 Replay Protected Memory Block: Not Supported
00:24:08.694
00:24:08.694 Firmware Slot Information
00:24:08.694 =========================
00:24:08.694 Active slot: 0
00:24:08.694
00:24:08.694
00:24:08.694 Error Log
00:24:08.694 =========
00:24:08.694
00:24:08.694 Active Namespaces
00:24:08.694 =================
00:24:08.694 Discovery Log Page
00:24:08.694 ==================
00:24:08.694 Generation Counter: 2
00:24:08.694 Number of Records: 2
00:24:08.694 Record Format: 0
00:24:08.694
00:24:08.694 Discovery Log Entry 0
00:24:08.694 ----------------------
00:24:08.694 Transport Type: 3 (TCP)
00:24:08.694 Address Family: 1 (IPv4)
00:24:08.694 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:08.694 Entry Flags:
00:24:08.694 Duplicate Returned Information: 1
00:24:08.694 Explicit Persistent Connection Support for Discovery: 1
00:24:08.694 Transport Requirements:
00:24:08.694 Secure Channel: Not Required
00:24:08.694 Port ID: 0 (0x0000)
00:24:08.694 Controller ID: 65535 (0xffff)
00:24:08.694 Admin Max SQ Size: 128
00:24:08.694 Transport Service Identifier: 4420
00:24:08.694 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:08.695 Transport Address: 10.0.0.2
00:24:08.695 Discovery Log Entry 1
00:24:08.695 ----------------------
00:24:08.695 Transport Type: 3 (TCP)
00:24:08.695 Address Family: 1 (IPv4)
00:24:08.695 Subsystem Type: 2 (NVM Subsystem)
00:24:08.695 Entry Flags:
00:24:08.695 Duplicate Returned Information: 0
00:24:08.695 Explicit Persistent Connection Support for Discovery: 0
00:24:08.695 Transport Requirements:
00:24:08.695 Secure Channel: Not Required
00:24:08.695 Port ID: 0 (0x0000)
00:24:08.695 Controller ID: 65535 (0xffff)
00:24:08.695 Admin Max SQ Size: 128
00:24:08.695 Transport Service Identifier: 4420
00:24:08.695 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:24:08.695 Transport Address: 10.0.0.2 [2024-06-11 09:38:40.501430] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:24:08.695 [2024-06-11 09:38:40.501442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.695 [2024-06-11 09:38:40.501449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.695 [2024-06-11 09:38:40.501455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.695 [2024-06-11 09:38:40.501461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:08.695 [2024-06-11 09:38:40.501469] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.501473] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.501477] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142ec0)
00:24:08.695 [2024-06-11 09:38:40.501484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.695 [2024-06-11 09:38:40.501496] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6210, cid 3, qid 0
00:24:08.695 [2024-06-11 09:38:40.501633] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.695 [2024-06-11 09:38:40.501640] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.695 [2024-06-11 09:38:40.501643] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.501647] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6210) on tqpair=0x1142ec0
00:24:08.695 [2024-06-11 09:38:40.501657] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.501661] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.501664] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142ec0)
00:24:08.695 [2024-06-11 09:38:40.501671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.695 [2024-06-11 09:38:40.501684] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6210, cid 3, qid 0
00:24:08.695 [2024-06-11 09:38:40.501930] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.695 [2024-06-11 09:38:40.501937] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
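The two discovery entries above advertise a discovery subsystem and one NVM subsystem, both served over TCP at 10.0.0.2:4420. As a point of reference, here is a minimal, hypothetical sketch of connecting to that NVM subsystem with the SPDK host library (default environment and controller options assumed; the transport string mirrors the -r argument passed to spdk_nvme_identify further below):

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&opts);
        if (spdk_env_init(&opts) != 0) {
            return 1;
        }

        /* Values taken from Discovery Log Entry 1 above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Synchronous connect: drives the FABRIC CONNECT and the
         * CC.EN/CSTS.RDY init sequence traced in the DEBUG lines below. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        spdk_nvme_detach(ctrlr);
        return 0;
    }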
00:24:08.695 [2024-06-11 09:38:40.501940] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.501944] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6210) on tqpair=0x1142ec0
00:24:08.695 [2024-06-11 09:38:40.501949] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:24:08.695 [2024-06-11 09:38:40.501954] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:24:08.695 [2024-06-11 09:38:40.501963] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.501966] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.501970] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142ec0)
00:24:08.695 [2024-06-11 09:38:40.501977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.695 [2024-06-11 09:38:40.501986] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6210, cid 3, qid 0
00:24:08.695 [2024-06-11 09:38:40.502187] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.695 [2024-06-11 09:38:40.502193] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.695 [2024-06-11 09:38:40.502196] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.502202] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6210) on tqpair=0x1142ec0
00:24:08.695 [2024-06-11 09:38:40.502212] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.502216] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.502220] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142ec0)
00:24:08.695 [2024-06-11 09:38:40.502226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.695 [2024-06-11 09:38:40.502236] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6210, cid 3, qid 0
00:24:08.695 [2024-06-11 09:38:40.502435] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.695 [2024-06-11 09:38:40.502442] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.695 [2024-06-11 09:38:40.502445] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.502449] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6210) on tqpair=0x1142ec0
00:24:08.695 [2024-06-11 09:38:40.502459] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.502463] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.502466] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142ec0)
00:24:08.695 [2024-06-11 09:38:40.502473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.695 [2024-06-11 09:38:40.502483] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6210, cid 3, qid 0
00:24:08.695 [2024-06-11 09:38:40.502739] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.695 [2024-06-11 09:38:40.502745] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.695 [2024-06-11 09:38:40.502748] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.502752] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6210) on tqpair=0x1142ec0
00:24:08.695 [2024-06-11 09:38:40.502762] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.502766] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.502769] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142ec0)
00:24:08.695 [2024-06-11 09:38:40.502776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.695 [2024-06-11 09:38:40.502785] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6210, cid 3, qid 0
00:24:08.695 [2024-06-11 09:38:40.502989] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.695 [2024-06-11 09:38:40.502996] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.695 [2024-06-11 09:38:40.502999] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.503003] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6210) on tqpair=0x1142ec0
00:24:08.695 [2024-06-11 09:38:40.503013] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.503017] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.503020] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142ec0)
00:24:08.695 [2024-06-11 09:38:40.503027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.695 [2024-06-11 09:38:40.503036] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6210, cid 3, qid 0
00:24:08.695 [2024-06-11 09:38:40.503252] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.695 [2024-06-11 09:38:40.503258] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.695 [2024-06-11 09:38:40.503262] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.503269] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6210) on tqpair=0x1142ec0
00:24:08.695 [2024-06-11 09:38:40.503279] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.503283] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.503286] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142ec0)
00:24:08.695 [2024-06-11 09:38:40.503293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.695 [2024-06-11 09:38:40.503303] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6210, cid 3, qid 0
00:24:08.695 [2024-06-11 09:38:40.503543] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.695 [2024-06-11 09:38:40.503549] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.695 [2024-06-11 09:38:40.503553] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.503556] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6210) on tqpair=0x1142ec0
00:24:08.695 [2024-06-11 09:38:40.503567] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.503570] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.695 [2024-06-11 09:38:40.503574] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142ec0)
00:24:08.695 [2024-06-11 09:38:40.503581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.695 [2024-06-11 09:38:40.503590] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6210, cid 3, qid 0
00:24:08.978 [2024-06-11 09:38:40.503847] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.978 [2024-06-11 09:38:40.503855] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.978 [2024-06-11 09:38:40.503860] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.978 [2024-06-11 09:38:40.503864] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6210) on tqpair=0x1142ec0
00:24:08.978 [2024-06-11 09:38:40.503875] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.978 [2024-06-11 09:38:40.503878] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.978 [2024-06-11 09:38:40.503882] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142ec0)
00:24:08.978 [2024-06-11 09:38:40.503889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.978 [2024-06-11 09:38:40.503898] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6210, cid 3, qid 0
00:24:08.978 [2024-06-11 09:38:40.504149] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.978 [2024-06-11 09:38:40.504156] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.978 [2024-06-11 09:38:40.504160] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.978 [2024-06-11 09:38:40.504163] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6210) on tqpair=0x1142ec0
00:24:08.978 [2024-06-11 09:38:40.504173] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.978 [2024-06-11 09:38:40.504177] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.978 [2024-06-11 09:38:40.504181] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142ec0)
00:24:08.978 [2024-06-11 09:38:40.504188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.978 [2024-06-11 09:38:40.504197] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6210, cid 3, qid 0
00:24:08.978 [2024-06-11 09:38:40.504393] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.978 [2024-06-11 09:38:40.504401] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.978 [2024-06-11 09:38:40.504404] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.978 [2024-06-11 09:38:40.504408] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6210) on tqpair=0x1142ec0 [2024-06-11 09:38:40.504421] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.978 [2024-06-11 09:38:40.504425] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.978 [2024-06-11 09:38:40.504428] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142ec0)
00:24:08.978 [2024-06-11 09:38:40.504435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.978 [2024-06-11 09:38:40.504444] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6210, cid 3, qid 0
00:24:08.979 [2024-06-11 09:38:40.504702] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.979 [2024-06-11 09:38:40.504709] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.979 [2024-06-11 09:38:40.504712] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.979 [2024-06-11 09:38:40.504716] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6210) on tqpair=0x1142ec0
00:24:08.979 [2024-06-11 09:38:40.504726] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.979 [2024-06-11 09:38:40.504729] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.979 [2024-06-11 09:38:40.504733] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142ec0)
00:24:08.979 [2024-06-11 09:38:40.504739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.979 [2024-06-11 09:38:40.504749] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6210, cid 3, qid 0
00:24:08.979 [2024-06-11 09:38:40.505006] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.979 [2024-06-11 09:38:40.505013] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.979 [2024-06-11 09:38:40.505016] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.979 [2024-06-11 09:38:40.505020] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6210) on tqpair=0x1142ec0
00:24:08.979 [2024-06-11 09:38:40.505030] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.979 [2024-06-11 09:38:40.505034] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.979 [2024-06-11 09:38:40.505037] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142ec0)
00:24:08.979 [2024-06-11 09:38:40.505044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.979 [2024-06-11 09:38:40.505053] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6210, cid 3, qid 0
00:24:08.979 [2024-06-11 09:38:40.505256] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.979 [2024-06-11 09:38:40.505263] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.979 [2024-06-11 09:38:40.505266] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.979 [2024-06-11 09:38:40.505270] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6210) on tqpair=0x1142ec0
00:24:08.979 [2024-06-11 09:38:40.505280] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.979 [2024-06-11 09:38:40.505284] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.979 [2024-06-11 09:38:40.505287] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1142ec0)
00:24:08.979 [2024-06-11 09:38:40.505294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.979 [2024-06-11 09:38:40.505303] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c6210, cid 3, qid 0
00:24:08.979 [2024-06-11 09:38:40.509322] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.979 [2024-06-11 09:38:40.509330] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.979 [2024-06-11 09:38:40.509334] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.979 [2024-06-11 09:38:40.509337] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11c6210) on tqpair=0x1142ec0
00:24:08.979 [2024-06-11 09:38:40.509346] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds
00:24:08.979
00:24:08.979 09:38:40 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:24:08.979 [2024-06-11 09:38:40.546662] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:24:08.979 [2024-06-11 09:38:40.546710] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238833 ]
00:24:08.979 EAL: No free 2048 kB hugepages reported on node 1
00:24:08.979 [2024-06-11 09:38:40.579845] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:24:08.979 [2024-06-11 09:38:40.579885] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:24:08.979 [2024-06-11 09:38:40.579890] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:24:08.979 [2024-06-11 09:38:40.579901] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:24:08.979 [2024-06-11 09:38:40.579909] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:24:08.979 [2024-06-11 09:38:40.583355] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:24:08.979 [2024-06-11 09:38:40.583384] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xbebec0 0
00:24:08.979 [2024-06-11 09:38:40.591323] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:24:08.979 [2024-06-11 09:38:40.591334] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:24:08.979 [2024-06-11 09:38:40.591338] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:24:08.979 [2024-06-11 09:38:40.591341] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:24:08.979 [2024-06-11 09:38:40.591374] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.979 [2024-06-11 09:38:40.591380] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.979 [2024-06-11 09:38:40.591384] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbebec0)
00:24:08.979 [2024-06-11 09:38:40.591395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:24:08.979 [2024-06-11 09:38:40.591411] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6edf0, cid 0, qid 0
00:24:08.979 [2024-06-11 09:38:40.599328] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.979 [2024-06-11 09:38:40.599338] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.979 [2024-06-11 09:38:40.599342] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.979 [2024-06-11 09:38:40.599346] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6edf0) on tqpair=0xbebec0
00:24:08.979 [2024-06-11 09:38:40.599354] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:24:08.979 [2024-06-11 09:38:40.599363] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:24:08.979 [2024-06-11 09:38:40.599368] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:24:08.979 [2024-06-11 09:38:40.599381] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.979 [2024-06-11 09:38:40.599385] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.979 [2024-06-11 09:38:40.599388] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbebec0)
00:24:08.979 [2024-06-11 09:38:40.599396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.979 [2024-06-11 09:38:40.599412] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6edf0, cid 0, qid 0
00:24:08.979 [2024-06-11 09:38:40.599613] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.979 [2024-06-11 09:38:40.599620] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.979 [2024-06-11 09:38:40.599624] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.979 [2024-06-11 09:38:40.599627] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6edf0) on tqpair=0xbebec0
00:24:08.979 [2024-06-11 09:38:40.599632] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:24:08.979 [2024-06-11 09:38:40.599641] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:24:08.979 [2024-06-11 09:38:40.599649] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.979 [2024-06-11 09:38:40.599653] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.599657] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbebec0)
00:24:08.980 [2024-06-11 09:38:40.599663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.980 [2024-06-11 09:38:40.599674] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6edf0, cid 0, qid 0
00:24:08.980 [2024-06-11 09:38:40.599808] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.980 [2024-06-11 09:38:40.599814] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.980 [2024-06-11 09:38:40.599818] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.599821] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6edf0) on tqpair=0xbebec0
00:24:08.980 [2024-06-11 09:38:40.599826] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:24:08.980 [2024-06-11 09:38:40.599834] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:24:08.980 [2024-06-11 09:38:40.599840] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.599844] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.599848] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbebec0)
00:24:08.980 [2024-06-11 09:38:40.599854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.980 [2024-06-11 09:38:40.599864] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6edf0, cid 0, qid 0
00:24:08.980 [2024-06-11 09:38:40.600025] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.980 [2024-06-11 09:38:40.600031] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.980 [2024-06-11 09:38:40.600034] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.600038] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6edf0) on tqpair=0xbebec0
00:24:08.980 [2024-06-11 09:38:40.600043] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:24:08.980 [2024-06-11 09:38:40.600053] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.600060] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.600063] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbebec0)
00:24:08.980 [2024-06-11 09:38:40.600070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.980 [2024-06-11 09:38:40.600080] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6edf0, cid 0, qid 0
00:24:08.980 [2024-06-11 09:38:40.600258] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.980 [2024-06-11 09:38:40.600268] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.980 [2024-06-11 09:38:40.600271] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.600275] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6edf0) on tqpair=0xbebec0
00:24:08.980 [2024-06-11 09:38:40.600279] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:24:08.980 [2024-06-11 09:38:40.600284] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:24:08.980 [2024-06-11 09:38:40.600291] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:24:08.980 [2024-06-11 09:38:40.600400] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:24:08.980 [2024-06-11 09:38:40.600405] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:24:08.980 [2024-06-11 09:38:40.600412] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.600416] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.600420] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbebec0)
00:24:08.980 [2024-06-11 09:38:40.600426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.980 [2024-06-11 09:38:40.600437] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6edf0, cid 0, qid 0
00:24:08.980 [2024-06-11 09:38:40.600669] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.980 [2024-06-11 09:38:40.600676] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.980 [2024-06-11 09:38:40.600679] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.600683] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6edf0) on tqpair=0xbebec0
00:24:08.980 [2024-06-11 09:38:40.600688] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:24:08.980 [2024-06-11 09:38:40.600697] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.600704] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.600707] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbebec0)
00:24:08.980 [2024-06-11 09:38:40.600714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.980 [2024-06-11 09:38:40.600723] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6edf0, cid 0, qid 0
00:24:08.980 [2024-06-11 09:38:40.600858] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.980 [2024-06-11 09:38:40.600865] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.980 [2024-06-11 09:38:40.600868] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.600872] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6edf0) on tqpair=0xbebec0
00:24:08.980 [2024-06-11 09:38:40.600876] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:24:08.980 [2024-06-11 09:38:40.600880] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:24:08.980 [2024-06-11 09:38:40.600891] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:24:08.980 [2024-06-11 09:38:40.600899] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:24:08.980 [2024-06-11 09:38:40.600908] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
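The state transitions just logged (check en, disable and wait for CSTS.RDY = 0, Setting CC.EN = 1, wait for CSTS.RDY = 1) are the standard NVMe controller enable handshake, carried here as Fabrics PROPERTY GET/SET exchanges. The following self-contained sketch models that loop; prop_get() and prop_set() are hypothetical stand-ins for those property commands, backed by a toy controller whose RDY bit simply follows EN:

    #include <stdint.h>
    #include <stdio.h>

    #define NVME_REG_CC    0x14  /* Controller Configuration register */
    #define NVME_REG_CSTS  0x1c  /* Controller Status register */
    #define NVME_CC_EN     (1u << 0)
    #define NVME_CSTS_RDY  (1u << 0)

    static uint32_t regs[0x40];  /* toy register file */

    static uint32_t prop_get(uint32_t off) { return regs[off / 4]; }

    static void prop_set(uint32_t off, uint32_t val)
    {
        regs[off / 4] = val;
        /* Toy controller model: CSTS.RDY immediately tracks CC.EN. */
        if (off == NVME_REG_CC) {
            regs[NVME_REG_CSTS / 4] = (val & NVME_CC_EN) ? NVME_CSTS_RDY : 0;
        }
    }

    int main(void)
    {
        /* "disable and wait for CSTS.RDY = 0" */
        prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) & ~NVME_CC_EN);
        while (prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY) { }

        /* "Setting CC.EN = 1", then "wait for CSTS.RDY = 1" */
        prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | NVME_CC_EN);
        while (!(prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY)) { }

        printf("CC.EN = 1 && CSTS.RDY = 1 - controller is ready\n");
        return 0;
    }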
00:24:08.980 [2024-06-11 09:38:40.600911] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbebec0)
00:24:08.980 [2024-06-11 09:38:40.600920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.980 [2024-06-11 09:38:40.600930] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6edf0, cid 0, qid 0
00:24:08.980 [2024-06-11 09:38:40.601124] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:08.980 [2024-06-11 09:38:40.601131] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:08.980 [2024-06-11 09:38:40.601134] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.601140] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbebec0): datao=0, datal=4096, cccid=0
00:24:08.980 [2024-06-11 09:38:40.601147] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6edf0) on tqpair(0xbebec0): expected_datao=0, payload_size=4096
00:24:08.980 [2024-06-11 09:38:40.601151] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.601159] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.601162] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.641508] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.980 [2024-06-11 09:38:40.641521] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.980 [2024-06-11 09:38:40.641524] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.641529] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6edf0) on tqpair=0xbebec0
00:24:08.980 [2024-06-11 09:38:40.641540] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:24:08.980 [2024-06-11 09:38:40.641545] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:24:08.980 [2024-06-11 09:38:40.641549] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:24:08.980 [2024-06-11 09:38:40.641557] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:24:08.980 [2024-06-11 09:38:40.641561] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:24:08.980 [2024-06-11 09:38:40.641566] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:24:08.980 [2024-06-11 09:38:40.641574] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:24:08.980 [2024-06-11 09:38:40.641581] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.641589] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.980 [2024-06-11 09:38:40.641592] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbebec0)
00:24:08.980 [2024-06-11 09:38:40.641600] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
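The identify_done lines above also show how the effective transfer limit is picked: the TCP transport reports an effectively unlimited max_xfer_size (4294967295), so the controller's MDTS governs. MDTS is a power-of-two multiple of the minimum page size (4096 bytes, as in the identify output earlier), and 4096 << 5 = 131072, which implies MDTS = 5; that exponent is inferred here, since the log prints only the resulting byte count. A worked check:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t mpsmin_bytes = 4096; /* Memory Page Size Minimum (above) */
        uint8_t  mdts = 5;            /* inferred: 4096 << 5 == 131072 */

        /* Matches "MDTS max_xfer_size 131072" and
         * "Max Data Transfer Size: 131072". */
        printf("max_xfer_size = %u\n", mpsmin_bytes << mdts);
        return 0;
    }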
00:24:08.981 [2024-06-11 09:38:40.641612] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6edf0, cid 0, qid 0
00:24:08.981 [2024-06-11 09:38:40.641802] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.981 [2024-06-11 09:38:40.641808] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.981 [2024-06-11 09:38:40.641812] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.981 [2024-06-11 09:38:40.641815] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6edf0) on tqpair=0xbebec0
00:24:08.981 [2024-06-11 09:38:40.641822] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.981 [2024-06-11 09:38:40.641826] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.981 [2024-06-11 09:38:40.641831] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xbebec0)
00:24:08.981 [2024-06-11 09:38:40.641840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:08.981 [2024-06-11 09:38:40.641848] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.981 [2024-06-11 09:38:40.641852] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.981 [2024-06-11 09:38:40.641856] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xbebec0)
00:24:08.981 [2024-06-11 09:38:40.641861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:08.981 [2024-06-11 09:38:40.641867] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.981 [2024-06-11 09:38:40.641871] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.981 [2024-06-11 09:38:40.641874] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xbebec0)
00:24:08.981 [2024-06-11 09:38:40.641880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:08.981 [2024-06-11 09:38:40.641886] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.981 [2024-06-11 09:38:40.641890] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.981 [2024-06-11 09:38:40.641893] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbebec0)
00:24:08.981 [2024-06-11 09:38:40.641899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:08.981 [2024-06-11 09:38:40.641903] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:24:08.981 [2024-06-11 09:38:40.641914] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:24:08.981 [2024-06-11 09:38:40.641923] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.981 [2024-06-11 09:38:40.641926] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbebec0)
00:24:08.981 [2024-06-11 09:38:40.641933] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.981 [2024-06-11 09:38:40.641945] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6edf0, cid 0, qid 0
00:24:08.981 [2024-06-11 09:38:40.641950] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6ef50, cid 1, qid 0
00:24:08.981 [2024-06-11 09:38:40.641955] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f0b0, cid 2, qid 0
00:24:08.981 [2024-06-11 09:38:40.641959] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f210, cid 3, qid 0
00:24:08.981 [2024-06-11 09:38:40.641964] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f370, cid 4, qid 0
00:24:08.981 [2024-06-11 09:38:40.642210] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.981 [2024-06-11 09:38:40.642217] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.981 [2024-06-11 09:38:40.642220] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.981 [2024-06-11 09:38:40.642224] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f370) on tqpair=0xbebec0
00:24:08.981 [2024-06-11 09:38:40.642228] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:24:08.981 [2024-06-11 09:38:40.642233] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:24:08.981 [2024-06-11 09:38:40.642243] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:24:08.981 [2024-06-11 09:38:40.642249] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:24:08.981 [2024-06-11 09:38:40.642255] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.981 [2024-06-11 09:38:40.642259] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.981 [2024-06-11 09:38:40.642264] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbebec0)
00:24:08.981 [2024-06-11 09:38:40.642271] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:24:08.981 [2024-06-11 09:38:40.642281] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f370, cid 4, qid 0
00:24:08.981 [2024-06-11 09:38:40.642379] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.981 [2024-06-11 09:38:40.642385] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.981 [2024-06-11 09:38:40.642389] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.981 [2024-06-11 09:38:40.642392] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f370) on tqpair=0xbebec0
00:24:08.981 [2024-06-11 09:38:40.642446] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:24:08.981 [2024-06-11 09:38:40.642457] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:24:08.981 [2024-06-11 09:38:40.642467] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.981 [2024-06-11 09:38:40.642470] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbebec0)
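The two keep-alive records above decode as follows: cdw10 bits 7:0 of a Get/Set Features command carry the Feature Identifier, so cdw10:0000000f selects the Keep Alive Timer (FID 0x0F) and cdw10:00000007 selects Number of Queues, and the driver then schedules one KEEP ALIVE command (opcode 0x18, visible further below) every 5,000,000 us, i.e. every 5 seconds. A small worked decode (feature names per the NVMe specification, not printed by the log):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t get_kato = 0x0000000f; /* GET FEATURES KEEP ALIVE TIMER  */
        uint32_t set_nq   = 0x00000007; /* SET FEATURES NUMBER OF QUEUES  */
        uint64_t ka_us    = 5000000;    /* "Sending keep alive every ..." */

        printf("FID 0x%02x, FID 0x%02x\n", get_kato & 0xff, set_nq & 0xff);
        printf("keep alive interval: %llu s\n",
               (unsigned long long)ka_us / 1000000); /* 5 s */
        return 0;
    }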
00:24:08.981 [2024-06-11 09:38:40.642477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.981 [2024-06-11 09:38:40.642487] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f370, cid 4, qid 0
00:24:08.981 [2024-06-11 09:38:40.642643] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:08.981 [2024-06-11 09:38:40.642654] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:08.981 [2024-06-11 09:38:40.642660] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:08.981 [2024-06-11 09:38:40.642666] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbebec0): datao=0, datal=4096, cccid=4
00:24:08.981 [2024-06-11 09:38:40.642674] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6f370) on tqpair(0xbebec0): expected_datao=0, payload_size=4096
00:24:08.981 [2024-06-11 09:38:40.642682] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.981 [2024-06-11 09:38:40.642696] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:08.981 [2024-06-11 09:38:40.642700] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.687325] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.982 [2024-06-11 09:38:40.687337] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.982 [2024-06-11 09:38:40.687341] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.687345] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f370) on tqpair=0xbebec0
00:24:08.982 [2024-06-11 09:38:40.687355] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:24:08.982 [2024-06-11 09:38:40.687371] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:24:08.982 [2024-06-11 09:38:40.687382] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:24:08.982 [2024-06-11 09:38:40.687393] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.687396] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbebec0)
00:24:08.982 [2024-06-11 09:38:40.687404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.982 [2024-06-11 09:38:40.687416] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f370, cid 4, qid 0
00:24:08.982 [2024-06-11 09:38:40.687642] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:08.982 [2024-06-11 09:38:40.687649] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:08.982 [2024-06-11 09:38:40.687655] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.687659] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbebec0): datao=0, datal=4096, cccid=4
00:24:08.982 [2024-06-11 09:38:40.687663] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6f370) on tqpair(0xbebec0): expected_datao=0, payload_size=4096
00:24:08.982 [2024-06-11 09:38:40.687668] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.687707] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.687713] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.728514] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.982 [2024-06-11 09:38:40.728524] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.982 [2024-06-11 09:38:40.728530] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.728534] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f370) on tqpair=0xbebec0
00:24:08.982 [2024-06-11 09:38:40.728547] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:24:08.982 [2024-06-11 09:38:40.728556] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:24:08.982 [2024-06-11 09:38:40.728564] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.728568] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbebec0)
00:24:08.982 [2024-06-11 09:38:40.728574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.982 [2024-06-11 09:38:40.728589] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f370, cid 4, qid 0
00:24:08.982 [2024-06-11 09:38:40.728814] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:08.982 [2024-06-11 09:38:40.728821] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:08.982 [2024-06-11 09:38:40.728824] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.728828] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbebec0): datao=0, datal=4096, cccid=4
00:24:08.982 [2024-06-11 09:38:40.728832] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6f370) on tqpair(0xbebec0): expected_datao=0, payload_size=4096
00:24:08.982 [2024-06-11 09:38:40.728836] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.728880] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.728885] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.770514] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.982 [2024-06-11 09:38:40.770527] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.982 [2024-06-11 09:38:40.770531] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.770535] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f370) on tqpair=0xbebec0
00:24:08.982 [2024-06-11 09:38:40.770546] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms)
00:24:08.982 [2024-06-11 09:38:40.770554] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms)
00:24:08.982 [2024-06-11 09:38:40.770563] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms)
00:24:08.982 [2024-06-11 09:38:40.770569] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms)
00:24:08.982 [2024-06-11 09:38:40.770574] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms)
00:24:08.982 [2024-06-11 09:38:40.770581] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID
00:24:08.982 [2024-06-11 09:38:40.770587] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms)
00:24:08.982 [2024-06-11 09:38:40.770595] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout)
00:24:08.982 [2024-06-11 09:38:40.770611] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.770615] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbebec0)
00:24:08.982 [2024-06-11 09:38:40.770623] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.982 [2024-06-11 09:38:40.770629] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.770633] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.770636] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbebec0)
00:24:08.982 [2024-06-11 09:38:40.770643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:24:08.982 [2024-06-11 09:38:40.770657] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f370, cid 4, qid 0
00:24:08.982 [2024-06-11 09:38:40.770662] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f4d0, cid 5, qid 0
00:24:08.982 [2024-06-11 09:38:40.770882] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.982 [2024-06-11 09:38:40.770890] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.982 [2024-06-11 09:38:40.770893] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.770897] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f370) on tqpair=0xbebec0
00:24:08.982 [2024-06-11 09:38:40.770904] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.982 [2024-06-11 09:38:40.770913] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.982 [2024-06-11 09:38:40.770917] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.770920] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f4d0) on tqpair=0xbebec0
00:24:08.982 [2024-06-11 09:38:40.770929] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.982 [2024-06-11 09:38:40.770933] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbebec0)
00:24:08.982 [2024-06-11 09:38:40.770939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.982 [2024-06-11 09:38:40.770950] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f4d0, cid 5, qid 0
00:24:08.982 [2024-06-11 09:38:40.771095] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:08.982 [2024-06-11 09:38:40.771102] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:08.982 [2024-06-11 09:38:40.771105] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:08.983 [2024-06-11 09:38:40.771109] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f4d0) on tqpair=0xbebec0
00:24:08.983 [2024-06-11 09:38:40.771118] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:08.983 [2024-06-11 09:38:40.771121] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbebec0)
00:24:08.983 [2024-06-11 09:38:40.771130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:08.983 [2024-06-11 09:38:40.771140] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f4d0, cid 5, qid 0
00:24:09.261 [2024-06-11 09:38:40.775325] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:09.261 [2024-06-11 09:38:40.775334] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:09.261 [2024-06-11 09:38:40.775338] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:09.261 [2024-06-11 09:38:40.775345] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f4d0) on tqpair=0xbebec0
00:24:09.261 [2024-06-11 09:38:40.775355] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:09.261 [2024-06-11 09:38:40.775360] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbebec0)
00:24:09.261 [2024-06-11 09:38:40.775366] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.261 [2024-06-11 09:38:40.775377] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f4d0, cid 5, qid 0
00:24:09.261 [2024-06-11 09:38:40.775573] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:09.261 [2024-06-11 09:38:40.775580] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:09.261 [2024-06-11 09:38:40.775584] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:09.261 [2024-06-11 09:38:40.775591] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f4d0) on tqpair=0xbebec0
00:24:09.261 [2024-06-11 09:38:40.775603] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:09.261 [2024-06-11 09:38:40.775607] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xbebec0)
00:24:09.261 [2024-06-11 09:38:40.775614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.261 [2024-06-11 09:38:40.775621] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:09.261 [2024-06-11 09:38:40.775625] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xbebec0)
00:24:09.261 [2024-06-11 09:38:40.775631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.261 [2024-06-11 09:38:40.775640] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:09.261 [2024-06-11 09:38:40.775644] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xbebec0)
00:24:09.261 [2024-06-11 09:38:40.775650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.261 [2024-06-11 09:38:40.775657] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:09.261 [2024-06-11 09:38:40.775661] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xbebec0)
00:24:09.261 [2024-06-11 09:38:40.775667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.261 [2024-06-11 09:38:40.775678] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f4d0, cid 5, qid 0
00:24:09.261 [2024-06-11 09:38:40.775683] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f370, cid 4, qid 0
00:24:09.261 [2024-06-11 09:38:40.775688] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f630, cid 6, qid 0
00:24:09.261 [2024-06-11 09:38:40.775692] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f790, cid 7, qid 0
00:24:09.261 [2024-06-11 09:38:40.775928] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:09.261 [2024-06-11 09:38:40.775939] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:09.261 [2024-06-11 09:38:40.775945] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:09.261 [2024-06-11 09:38:40.775951] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbebec0): datao=0, datal=8192, cccid=5
00:24:09.261 [2024-06-11 09:38:40.775959] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6f4d0) on tqpair(0xbebec0): expected_datao=0, payload_size=8192
00:24:09.261 [2024-06-11 09:38:40.775966] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:09.261 [2024-06-11 09:38:40.776014] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:09.261 [2024-06-11 09:38:40.776019] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:09.261 [2024-06-11 09:38:40.776027] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:09.261 [2024-06-11 09:38:40.776032] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:09.261 [2024-06-11 09:38:40.776036] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:09.261 [2024-06-11 09:38:40.776041] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbebec0): datao=0, datal=512, cccid=4
00:24:09.261 [2024-06-11 09:38:40.776048] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6f370) on tqpair(0xbebec0): expected_datao=0, payload_size=512
00:24:09.261 [2024-06-11 09:38:40.776052] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:09.261 [2024-06-11 09:38:40.776058] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:09.261 [2024-06-11 09:38:40.776061] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:09.261 [2024-06-11 09:38:40.776067] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:09.261 [2024-06-11
09:38:40.776074] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.261 [2024-06-11 09:38:40.776080] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.261 [2024-06-11 09:38:40.776087] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbebec0): datao=0, datal=512, cccid=6 00:24:09.261 [2024-06-11 09:38:40.776094] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6f630) on tqpair(0xbebec0): expected_datao=0, payload_size=512 00:24:09.261 [2024-06-11 09:38:40.776102] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.261 [2024-06-11 09:38:40.776112] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.261 [2024-06-11 09:38:40.776117] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.261 [2024-06-11 09:38:40.776123] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:09.261 [2024-06-11 09:38:40.776129] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:09.261 [2024-06-11 09:38:40.776132] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:09.261 [2024-06-11 09:38:40.776135] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xbebec0): datao=0, datal=4096, cccid=7 00:24:09.261 [2024-06-11 09:38:40.776139] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6f790) on tqpair(0xbebec0): expected_datao=0, payload_size=4096 00:24:09.261 [2024-06-11 09:38:40.776143] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.261 [2024-06-11 09:38:40.776150] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:09.261 [2024-06-11 09:38:40.776153] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:09.261 [2024-06-11 09:38:40.776160] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.262 [2024-06-11 09:38:40.776166] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.262 [2024-06-11 09:38:40.776169] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.262 [2024-06-11 09:38:40.776173] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f4d0) on tqpair=0xbebec0 00:24:09.262 [2024-06-11 09:38:40.776186] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.262 [2024-06-11 09:38:40.776191] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.262 [2024-06-11 09:38:40.776195] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.262 [2024-06-11 09:38:40.776198] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f370) on tqpair=0xbebec0 00:24:09.262 [2024-06-11 09:38:40.776207] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.262 [2024-06-11 09:38:40.776213] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.262 [2024-06-11 09:38:40.776216] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.262 [2024-06-11 09:38:40.776220] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f630) on tqpair=0xbebec0 00:24:09.262 [2024-06-11 09:38:40.776228] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.262 [2024-06-11 09:38:40.776234] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.262 [2024-06-11 09:38:40.776237] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.262 [2024-06-11 09:38:40.776242] 
nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f790) on tqpair=0xbebec0
00:24:09.262 =====================================================
00:24:09.262 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:09.262 =====================================================
00:24:09.262 Controller Capabilities/Features
00:24:09.262 ================================
00:24:09.262 Vendor ID: 8086
00:24:09.262 Subsystem Vendor ID: 8086
00:24:09.262 Serial Number: SPDK00000000000001
00:24:09.262 Model Number: SPDK bdev Controller
00:24:09.262 Firmware Version: 24.09
00:24:09.262 Recommended Arb Burst: 6
00:24:09.262 IEEE OUI Identifier: e4 d2 5c
00:24:09.262 Multi-path I/O
00:24:09.262 May have multiple subsystem ports: Yes
00:24:09.262 May have multiple controllers: Yes
00:24:09.262 Associated with SR-IOV VF: No
00:24:09.262 Max Data Transfer Size: 131072
00:24:09.262 Max Number of Namespaces: 32
00:24:09.262 Max Number of I/O Queues: 127
00:24:09.262 NVMe Specification Version (VS): 1.3
00:24:09.262 NVMe Specification Version (Identify): 1.3
00:24:09.262 Maximum Queue Entries: 128
00:24:09.262 Contiguous Queues Required: Yes
00:24:09.262 Arbitration Mechanisms Supported
00:24:09.262 Weighted Round Robin: Not Supported
00:24:09.262 Vendor Specific: Not Supported
00:24:09.262 Reset Timeout: 15000 ms
00:24:09.262 Doorbell Stride: 4 bytes
00:24:09.262 NVM Subsystem Reset: Not Supported
00:24:09.262 Command Sets Supported
00:24:09.262 NVM Command Set: Supported
00:24:09.262 Boot Partition: Not Supported
00:24:09.262 Memory Page Size Minimum: 4096 bytes
00:24:09.262 Memory Page Size Maximum: 4096 bytes
00:24:09.262 Persistent Memory Region: Not Supported
00:24:09.262 Optional Asynchronous Events Supported
00:24:09.262 Namespace Attribute Notices: Supported
00:24:09.262 Firmware Activation Notices: Not Supported
00:24:09.262 ANA Change Notices: Not Supported
00:24:09.262 PLE Aggregate Log Change Notices: Not Supported
00:24:09.262 LBA Status Info Alert Notices: Not Supported
00:24:09.262 EGE Aggregate Log Change Notices: Not Supported
00:24:09.262 Normal NVM Subsystem Shutdown event: Not Supported
00:24:09.262 Zone Descriptor Change Notices: Not Supported
00:24:09.262 Discovery Log Change Notices: Not Supported
00:24:09.262 Controller Attributes
00:24:09.262 128-bit Host Identifier: Supported
00:24:09.262 Non-Operational Permissive Mode: Not Supported
00:24:09.262 NVM Sets: Not Supported
00:24:09.262 Read Recovery Levels: Not Supported
00:24:09.262 Endurance Groups: Not Supported
00:24:09.262 Predictable Latency Mode: Not Supported
00:24:09.262 Traffic Based Keep ALive: Not Supported
00:24:09.262 Namespace Granularity: Not Supported
00:24:09.262 SQ Associations: Not Supported
00:24:09.262 UUID List: Not Supported
00:24:09.262 Multi-Domain Subsystem: Not Supported
00:24:09.262 Fixed Capacity Management: Not Supported
00:24:09.262 Variable Capacity Management: Not Supported
00:24:09.262 Delete Endurance Group: Not Supported
00:24:09.262 Delete NVM Set: Not Supported
00:24:09.262 Extended LBA Formats Supported: Not Supported
00:24:09.262 Flexible Data Placement Supported: Not Supported
00:24:09.262
00:24:09.262 Controller Memory Buffer Support
00:24:09.262 ================================
00:24:09.262 Supported: No
00:24:09.262
00:24:09.262 Persistent Memory Region Support
00:24:09.262 ================================
00:24:09.262 Supported: No
00:24:09.262
00:24:09.262 Admin Command Set Attributes
00:24:09.262 ============================
00:24:09.262 Security Send/Receive: Not Supported
00:24:09.262 Format NVM: Not Supported
00:24:09.262 Firmware Activate/Download: Not Supported
00:24:09.262 Namespace Management: Not Supported
00:24:09.262 Device Self-Test: Not Supported
00:24:09.262 Directives: Not Supported
00:24:09.262 NVMe-MI: Not Supported
00:24:09.262 Virtualization Management: Not Supported
00:24:09.262 Doorbell Buffer Config: Not Supported
00:24:09.262 Get LBA Status Capability: Not Supported
00:24:09.262 Command & Feature Lockdown Capability: Not Supported
00:24:09.262 Abort Command Limit: 4
00:24:09.262 Async Event Request Limit: 4
00:24:09.262 Number of Firmware Slots: N/A
00:24:09.262 Firmware Slot 1 Read-Only: N/A
00:24:09.262 Firmware Activation Without Reset: N/A
00:24:09.262 Multiple Update Detection Support: N/A
00:24:09.262 Firmware Update Granularity: No Information Provided
00:24:09.262 Per-Namespace SMART Log: No
00:24:09.262 Asymmetric Namespace Access Log Page: Not Supported
00:24:09.262 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:24:09.262 Command Effects Log Page: Supported
00:24:09.262 Get Log Page Extended Data: Supported
00:24:09.262 Telemetry Log Pages: Not Supported
00:24:09.262 Persistent Event Log Pages: Not Supported
00:24:09.262 Supported Log Pages Log Page: May Support
00:24:09.262 Commands Supported & Effects Log Page: Not Supported
00:24:09.262 Feature Identifiers & Effects Log Page:May Support
00:24:09.262 NVMe-MI Commands & Effects Log Page: May Support
00:24:09.262 Data Area 4 for Telemetry Log: Not Supported
00:24:09.262 Error Log Page Entries Supported: 128
00:24:09.262 Keep Alive: Supported
00:24:09.262 Keep Alive Granularity: 10000 ms
00:24:09.262
00:24:09.262 NVM Command Set Attributes
00:24:09.262 ==========================
00:24:09.262 Submission Queue Entry Size
00:24:09.262 Max: 64
00:24:09.262 Min: 64
00:24:09.262 Completion Queue Entry Size
00:24:09.262 Max: 16
00:24:09.262 Min: 16
00:24:09.262 Number of Namespaces: 32
00:24:09.262 Compare Command: Supported
00:24:09.262 Write Uncorrectable Command: Not Supported
00:24:09.262 Dataset Management Command: Supported
00:24:09.262 Write Zeroes Command: Supported
00:24:09.262 Set Features Save Field: Not Supported
00:24:09.262 Reservations: Supported
00:24:09.262 Timestamp: Not Supported
00:24:09.262 Copy: Supported
00:24:09.262 Volatile Write Cache: Present
00:24:09.262 Atomic Write Unit (Normal): 1
00:24:09.262 Atomic Write Unit (PFail): 1
00:24:09.262 Atomic Compare & Write Unit: 1
00:24:09.262 Fused Compare & Write: Supported
00:24:09.262 Scatter-Gather List
00:24:09.262 SGL Command Set: Supported
00:24:09.262 SGL Keyed: Supported
00:24:09.262 SGL Bit Bucket Descriptor: Not Supported
00:24:09.262 SGL Metadata Pointer: Not Supported
00:24:09.262 Oversized SGL: Not Supported
00:24:09.262 SGL Metadata Address: Not Supported
00:24:09.262 SGL Offset: Supported
00:24:09.262 Transport SGL Data Block: Not Supported
00:24:09.262 Replay Protected Memory Block: Not Supported
00:24:09.262
00:24:09.262 Firmware Slot Information
00:24:09.262 =========================
00:24:09.262 Active slot: 1
00:24:09.262 Slot 1 Firmware Revision: 24.09
00:24:09.262
00:24:09.262
00:24:09.262 Commands Supported and Effects
00:24:09.262 ==============================
00:24:09.262 Admin Commands
00:24:09.262 --------------
00:24:09.262 Get Log Page (02h): Supported
00:24:09.262 Identify (06h): Supported
00:24:09.262 Abort (08h): Supported
00:24:09.262 Set Features (09h): Supported
00:24:09.262 Get Features (0Ah): Supported
00:24:09.262 Asynchronous Event Request (0Ch): Supported
00:24:09.262 Keep Alive (18h): Supported
00:24:09.262 I/O Commands
00:24:09.262 ------------
00:24:09.262 Flush (00h): Supported LBA-Change
00:24:09.262 Write (01h): Supported LBA-Change
00:24:09.262 Read (02h): Supported
00:24:09.262 Compare (05h): Supported
00:24:09.262 Write Zeroes (08h): Supported LBA-Change
00:24:09.262 Dataset Management (09h): Supported LBA-Change
00:24:09.262 Copy (19h): Supported LBA-Change
00:24:09.262 Unknown (79h): Supported LBA-Change
00:24:09.262 Unknown (7Ah): Supported
00:24:09.262
00:24:09.262 Error Log
00:24:09.262 =========
00:24:09.263
00:24:09.263 Arbitration
00:24:09.263 ===========
00:24:09.263 Arbitration Burst: 1
00:24:09.263
00:24:09.263 Power Management
00:24:09.263 ================
00:24:09.263 Number of Power States: 1
00:24:09.263 Current Power State: Power State #0
00:24:09.263 Power State #0:
00:24:09.263 Max Power: 0.00 W
00:24:09.263 Non-Operational State: Operational
00:24:09.263 Entry Latency: Not Reported
00:24:09.263 Exit Latency: Not Reported
00:24:09.263 Relative Read Throughput: 0
00:24:09.263 Relative Read Latency: 0
00:24:09.263 Relative Write Throughput: 0
00:24:09.263 Relative Write Latency: 0
00:24:09.263 Idle Power: Not Reported
00:24:09.263 Active Power: Not Reported
00:24:09.263 Non-Operational Permissive Mode: Not Supported
00:24:09.263
00:24:09.263 Health Information
00:24:09.263 ==================
00:24:09.263 Critical Warnings:
00:24:09.263 Available Spare Space: OK
00:24:09.263 Temperature: OK
00:24:09.263 Device Reliability: OK
00:24:09.263 Read Only: No
00:24:09.263 Volatile Memory Backup: OK
00:24:09.263 Current Temperature: 0 Kelvin (-273 Celsius)
00:24:09.263 Temperature Threshold: [2024-06-11 09:38:40.776349] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:09.263 [2024-06-11 09:38:40.776355] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xbebec0)
00:24:09.263 [2024-06-11 09:38:40.776362] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.263 [2024-06-11 09:38:40.776374] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f790, cid 7, qid 0
00:24:09.263 [2024-06-11 09:38:40.776592] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:09.263 [2024-06-11 09:38:40.776599] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:09.263 [2024-06-11 09:38:40.776602] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:09.263 [2024-06-11 09:38:40.776606] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f790) on tqpair=0xbebec0
00:24:09.263 [2024-06-11 09:38:40.776636] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:24:09.263 [2024-06-11 09:38:40.776647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.263 [2024-06-11 09:38:40.776654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.263 [2024-06-11 09:38:40.776660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.263 [2024-06-11 09:38:40.776666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
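The capability report above is what the identify example app prints once the admin handshake (the GET FEATURES / GET LOG PAGE notices) completes; the *DEBUG* lines that follow it are the controller teardown that host/identify.sh triggers next. A minimal sketch of rerunning the same query by hand, assuming the identify example is installed alongside spdk_nvme_perf as build/bin/spdk_nvme_identify and that the debug output comes from SPDK's generic -L log-flag option (both the binary name and the "nvme" flag name are assumptions, not stated in this log):

    # connect to the TCP subsystem and dump controller/namespace data,
    # roughly what test/nvmf/host/identify.sh drove above
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -L nvme    # assumed log flag; would emit the nvme_ctrlr.c/nvme_tcp.c *DEBUG* lines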
00:24:09.263 [2024-06-11 09:38:40.776678] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.776682] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.776685] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbebec0) 00:24:09.263 [2024-06-11 09:38:40.776692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.263 [2024-06-11 09:38:40.776703] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f210, cid 3, qid 0 00:24:09.263 [2024-06-11 09:38:40.776809] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.263 [2024-06-11 09:38:40.776815] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.263 [2024-06-11 09:38:40.776819] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.776822] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f210) on tqpair=0xbebec0 00:24:09.263 [2024-06-11 09:38:40.776829] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.776833] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.776838] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbebec0) 00:24:09.263 [2024-06-11 09:38:40.776847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.263 [2024-06-11 09:38:40.776859] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f210, cid 3, qid 0 00:24:09.263 [2024-06-11 09:38:40.777063] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.263 [2024-06-11 09:38:40.777070] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.263 [2024-06-11 09:38:40.777073] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.777077] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f210) on tqpair=0xbebec0 00:24:09.263 [2024-06-11 09:38:40.777081] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:09.263 [2024-06-11 09:38:40.777085] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:09.263 [2024-06-11 09:38:40.777097] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.777101] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.777107] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbebec0) 00:24:09.263 [2024-06-11 09:38:40.777115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.263 [2024-06-11 09:38:40.777125] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f210, cid 3, qid 0 00:24:09.263 [2024-06-11 09:38:40.777294] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.263 [2024-06-11 09:38:40.777301] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.263 [2024-06-11 09:38:40.777304] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.777308] 
nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f210) on tqpair=0xbebec0 00:24:09.263 [2024-06-11 09:38:40.777324] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.777330] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.777333] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbebec0) 00:24:09.263 [2024-06-11 09:38:40.777340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.263 [2024-06-11 09:38:40.777350] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f210, cid 3, qid 0 00:24:09.263 [2024-06-11 09:38:40.777529] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.263 [2024-06-11 09:38:40.777535] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.263 [2024-06-11 09:38:40.777539] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.777542] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f210) on tqpair=0xbebec0 00:24:09.263 [2024-06-11 09:38:40.777552] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.777559] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.777562] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbebec0) 00:24:09.263 [2024-06-11 09:38:40.777569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.263 [2024-06-11 09:38:40.777578] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f210, cid 3, qid 0 00:24:09.263 [2024-06-11 09:38:40.777808] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.263 [2024-06-11 09:38:40.777814] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.263 [2024-06-11 09:38:40.777818] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.777821] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f210) on tqpair=0xbebec0 00:24:09.263 [2024-06-11 09:38:40.777831] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.777838] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.777841] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbebec0) 00:24:09.263 [2024-06-11 09:38:40.777848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.263 [2024-06-11 09:38:40.777858] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f210, cid 3, qid 0 00:24:09.263 [2024-06-11 09:38:40.778062] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.263 [2024-06-11 09:38:40.778069] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.263 [2024-06-11 09:38:40.778073] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.778076] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f210) on tqpair=0xbebec0 00:24:09.263 [2024-06-11 09:38:40.778086] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:24:09.263 [2024-06-11 09:38:40.778092] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.778098] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbebec0) 00:24:09.263 [2024-06-11 09:38:40.778105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.263 [2024-06-11 09:38:40.778115] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f210, cid 3, qid 0 00:24:09.263 [2024-06-11 09:38:40.778338] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.263 [2024-06-11 09:38:40.778345] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.263 [2024-06-11 09:38:40.778348] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.778352] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f210) on tqpair=0xbebec0 00:24:09.263 [2024-06-11 09:38:40.778362] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.778367] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.778371] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbebec0) 00:24:09.263 [2024-06-11 09:38:40.778377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.263 [2024-06-11 09:38:40.778387] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f210, cid 3, qid 0 00:24:09.263 [2024-06-11 09:38:40.778649] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.263 [2024-06-11 09:38:40.778656] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.263 [2024-06-11 09:38:40.778659] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.778663] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f210) on tqpair=0xbebec0 00:24:09.263 [2024-06-11 09:38:40.778673] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.778678] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.263 [2024-06-11 09:38:40.778682] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbebec0) 00:24:09.263 [2024-06-11 09:38:40.778688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.264 [2024-06-11 09:38:40.778698] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f210, cid 3, qid 0 00:24:09.264 [2024-06-11 09:38:40.778810] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:09.264 [2024-06-11 09:38:40.778816] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:09.264 [2024-06-11 09:38:40.778820] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:09.264 [2024-06-11 09:38:40.778823] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f210) on tqpair=0xbebec0 00:24:09.264 [2024-06-11 09:38:40.778834] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:09.264 [2024-06-11 09:38:40.778838] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:09.264 [2024-06-11 09:38:40.778842] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0xbebec0)
00:24:09.264 [2024-06-11 09:38:40.778849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.264 [2024-06-11 09:38:40.778858] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f210, cid 3, qid 0
00:24:09.264 [2024-06-11 09:38:40.779013] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:09.264 [2024-06-11 09:38:40.779020] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:09.264 [2024-06-11 09:38:40.779023] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:09.264 [2024-06-11 09:38:40.779027] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f210) on tqpair=0xbebec0
00:24:09.264 [2024-06-11 09:38:40.779037] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:09.264 [2024-06-11 09:38:40.779041] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:09.264 [2024-06-11 09:38:40.779047] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbebec0)
00:24:09.264 [2024-06-11 09:38:40.779053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.264 [2024-06-11 09:38:40.779063] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f210, cid 3, qid 0
00:24:09.264 [2024-06-11 09:38:40.779272] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:09.264 [2024-06-11 09:38:40.779278] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:09.264 [2024-06-11 09:38:40.779281] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:09.264 [2024-06-11 09:38:40.779285] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f210) on tqpair=0xbebec0
00:24:09.264 [2024-06-11 09:38:40.779295] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:09.264 [2024-06-11 09:38:40.779300] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:09.264 [2024-06-11 09:38:40.779304] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xbebec0)
00:24:09.264 [2024-06-11 09:38:40.779310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.264 [2024-06-11 09:38:40.783357] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6f210, cid 3, qid 0
00:24:09.264 [2024-06-11 09:38:40.783545] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:09.264 [2024-06-11 09:38:40.783551] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:09.264 [2024-06-11 09:38:40.783554] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:09.264 [2024-06-11 09:38:40.783558] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6f210) on tqpair=0xbebec0
00:24:09.264 [2024-06-11 09:38:40.783566] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:24:09.264 0 Kelvin (-273 Celsius)
00:24:09.264 Available Spare: 0%
00:24:09.264 Available Spare Threshold: 0%
00:24:09.264 Life Percentage Used: 0%
00:24:09.264 Data Units Read: 0
00:24:09.264 Data Units Written: 0
00:24:09.264 Host Read Commands: 0
00:24:09.264 Host Write Commands: 0
00:24:09.264 Controller Busy Time: 0 minutes
00:24:09.264 Power Cycles: 0
00:24:09.264 Power On Hours: 0 hours
00:24:09.264 Unsafe Shutdowns: 0
00:24:09.264 Unrecoverable Media Errors: 0
00:24:09.264 Lifetime Error Log Entries: 0
00:24:09.264 Warning Temperature Time: 0 minutes
00:24:09.264 Critical Temperature Time: 0 minutes
00:24:09.264
00:24:09.264 Number of Queues
00:24:09.264 ================
00:24:09.264 Number of I/O Submission Queues: 127
00:24:09.264 Number of I/O Completion Queues: 127
00:24:09.264
00:24:09.264 Active Namespaces
00:24:09.264 =================
00:24:09.264 Namespace ID:1
00:24:09.264 Error Recovery Timeout: Unlimited
00:24:09.264 Command Set Identifier: NVM (00h)
00:24:09.264 Deallocate: Supported
00:24:09.264 Deallocated/Unwritten Error: Not Supported
00:24:09.264 Deallocated Read Value: Unknown
00:24:09.264 Deallocate in Write Zeroes: Not Supported
00:24:09.264 Deallocated Guard Field: 0xFFFF
00:24:09.264 Flush: Supported
00:24:09.264 Reservation: Supported
00:24:09.264 Namespace Sharing Capabilities: Multiple Controllers
00:24:09.264 Size (in LBAs): 131072 (0GiB)
00:24:09.264 Capacity (in LBAs): 131072 (0GiB)
00:24:09.264 Utilization (in LBAs): 131072 (0GiB)
00:24:09.264 NGUID: ABCDEF0123456789ABCDEF0123456789
00:24:09.264 EUI64: ABCDEF0123456789
00:24:09.264 UUID: e2cf41d8-6465-48dc-84c2-95b0cc62a651
00:24:09.264 Thin Provisioning: Not Supported
00:24:09.264 Per-NS Atomic Units: Yes
00:24:09.264 Atomic Boundary Size (Normal): 0
00:24:09.264 Atomic Boundary Size (PFail): 0
00:24:09.264 Atomic Boundary Offset: 0
00:24:09.264 Maximum Single Source Range Length: 65535
00:24:09.264 Maximum Copy Length: 65535
00:24:09.264 Maximum Source Range Count: 1
00:24:09.264 NGUID/EUI64 Never Reused: No
00:24:09.264 Namespace Write Protected: No
00:24:09.264 Number of LBA Formats: 1
00:24:09.264 Current LBA Format: LBA Format #00
00:24:09.264 LBA Format #00: Data Size: 512 Metadata Size: 0
00:24:09.264
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:09.264 rmmod nvme_tcp
00:24:09.264 rmmod nvme_fabrics
00:24:09.264 rmmod nvme_keyring
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1238481 ']'
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1238481
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 1238481 ']'
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 1238481
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # uname
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1238481
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1238481'
00:24:09.264 killing process with pid 1238481
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@968 -- # kill 1238481
00:24:09.264 09:38:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@973 -- # wait 1238481
00:24:09.526 09:38:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:09.526 09:38:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:09.526 09:38:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:09.526 09:38:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:09.526 09:38:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:09.526 09:38:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:09.526 09:38:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:09.526 09:38:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:11.448 09:38:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:11.448
00:24:11.448 real 0m11.124s
00:24:11.448 user 0m8.666s
00:24:11.448 sys 0m5.723s
00:24:11.448 09:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable
00:24:11.448 09:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:11.448 ************************************
00:24:11.448 END TEST nvmf_identify
00:24:11.448 ************************************
00:24:11.448 09:38:43 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:24:11.448 09:38:43 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:24:11.448 09:38:43 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:24:11.448 09:38:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:11.448 ************************************
00:24:11.448 START TEST nvmf_perf
00:24:11.448 ************************************
00:24:11.448 09:38:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:24:11.710 * Looking for test storage...
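The xtrace above is nvmftestfini unwinding the identify test before perf starts: the test subsystem is deleted over RPC, the host-side kernel modules are removed, the nvmf_tgt process (pid 1238481 in this run) is killed, and the initiator address is flushed. Restated as a plain-shell sketch, with the literal pid standing in for killprocess's argument:

    # teardown sequence mirrored from the trace above
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # host/identify.sh@52
    modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics and nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill 1238481 && wait 1238481   # killprocess: stop the nvmf_tgt reactors
    ip -4 addr flush cvl_0_1       # nvmf_tcp_fini: drop the initiator-side address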
00:24:11.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.710 09:38:43 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:11.710 09:38:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:24:18.301 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:18.302 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:18.302 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:18.302 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:18.302 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1
00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:24:18.302 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:18.564 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:18.564 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:18.564 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:18.564 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:18.564 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:18.564 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:18.564 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:18.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:18.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms
00:24:18.564
00:24:18.564 --- 10.0.0.2 ping statistics ---
00:24:18.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:18.564 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms
00:24:18.564 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:18.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:18.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.368 ms
00:24:18.825
00:24:18.825 --- 10.0.0.1 ping statistics ---
00:24:18.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:18.825 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1242865
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1242865
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 1242865 ']'
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:18.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable
00:24:18.826 09:38:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:18.826 [2024-06-11 09:38:50.467210] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:24:18.826 [2024-06-11 09:38:50.467271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:18.826 EAL: No free 2048 kB hugepages reported on node 1
00:24:19.087 [2024-06-11 09:38:50.556362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:19.087 [2024-06-11 09:38:50.655815] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:19.087 [2024-06-11 09:38:50.655870] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
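Condensing the nvmf_tcp_init plumbing and target launch traced above: the first e810 port (cvl_0_0) moves into a private network namespace to act as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, and nvmf_tgt is started inside the namespace. The commands below are lifted from the trace; only the grouping and comments are editorial:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                 # connectivity check before starting the target
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &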
00:24:19.087 [2024-06-11 09:38:50.655879] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.087 [2024-06-11 09:38:50.655887] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.087 [2024-06-11 09:38:50.655894] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.087 [2024-06-11 09:38:50.656052] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.087 [2024-06-11 09:38:50.656164] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.087 [2024-06-11 09:38:50.656353] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:19.087 [2024-06-11 09:38:50.656417] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.660 09:38:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:19.660 09:38:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:24:19.660 09:38:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:19.660 09:38:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:19.660 09:38:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:19.660 09:38:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.660 09:38:51 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:19.660 09:38:51 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:20.232 09:38:51 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:20.232 09:38:51 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:20.492 09:38:52 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:20.492 09:38:52 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:20.752 09:38:52 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:20.752 09:38:52 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:20.753 09:38:52 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:20.753 09:38:52 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:20.753 09:38:52 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:20.753 [2024-06-11 09:38:52.504510] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.753 09:38:52 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:21.013 09:38:52 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:21.013 09:38:52 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:21.274 09:38:52 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:21.274 09:38:52 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:21.534 09:38:53 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:21.794 [2024-06-11 09:38:53.379743] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.794 09:38:53 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:22.053 09:38:53 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:22.053 09:38:53 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:22.053 09:38:53 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:22.053 09:38:53 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:23.438 Initializing NVMe Controllers 00:24:23.438 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:23.438 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:23.438 Initialization complete. Launching workers. 00:24:23.438 ======================================================== 00:24:23.438 Latency(us) 00:24:23.438 Device Information : IOPS MiB/s Average min max 00:24:23.438 PCIE (0000:65:00.0) NSID 1 from core 0: 79383.61 310.09 402.61 13.35 4778.63 00:24:23.438 ======================================================== 00:24:23.439 Total : 79383.61 310.09 402.61 13.35 4778.63 00:24:23.439 00:24:23.439 09:38:54 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.439 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.823 Initializing NVMe Controllers 00:24:24.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:24.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:24.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:24.823 Initialization complete. Launching workers. 
00:24:24.823 ======================================================== 00:24:24.823 Latency(us) 00:24:24.823 Device Information : IOPS MiB/s Average min max 00:24:24.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 83.00 0.32 12199.87 363.01 45818.01 00:24:24.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 86.00 0.34 11681.42 6989.67 47888.38 00:24:24.823 ======================================================== 00:24:24.823 Total : 169.00 0.66 11936.04 363.01 47888.38 00:24:24.823 00:24:24.823 09:38:56 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:24.823 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.207 Initializing NVMe Controllers 00:24:26.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:26.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:26.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:26.207 Initialization complete. Launching workers. 00:24:26.207 ======================================================== 00:24:26.207 Latency(us) 00:24:26.207 Device Information : IOPS MiB/s Average min max 00:24:26.207 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8739.62 34.14 3678.22 381.38 7410.57 00:24:26.207 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3705.84 14.48 8675.38 5456.64 16146.30 00:24:26.207 ======================================================== 00:24:26.208 Total : 12445.45 48.62 5166.20 381.38 16146.30 00:24:26.208 00:24:26.208 09:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:26.208 09:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:26.208 09:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:26.208 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.753 Initializing NVMe Controllers 00:24:28.753 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:28.753 Controller IO queue size 128, less than required. 00:24:28.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:28.753 Controller IO queue size 128, less than required. 00:24:28.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:28.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:28.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:28.753 Initialization complete. Launching workers. 
00:24:28.753 ======================================================== 00:24:28.753 Latency(us) 00:24:28.753 Device Information : IOPS MiB/s Average min max 00:24:28.753 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1078.71 269.68 121631.74 66203.59 180793.93 00:24:28.753 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 611.27 152.82 223076.67 81693.44 305830.62 00:24:28.753 ======================================================== 00:24:28.753 Total : 1689.97 422.49 158324.59 66203.59 305830.62 00:24:28.753 00:24:28.753 09:39:00 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:28.753 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.753 No valid NVMe controllers or AIO or URING devices found 00:24:28.753 Initializing NVMe Controllers 00:24:28.753 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:28.753 Controller IO queue size 128, less than required. 00:24:28.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:28.753 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:28.753 Controller IO queue size 128, less than required. 00:24:28.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:28.753 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:28.753 WARNING: Some requested NVMe devices were skipped 00:24:28.753 09:39:00 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:28.753 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.366 Initializing NVMe Controllers 00:24:31.366 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:31.366 Controller IO queue size 128, less than required. 00:24:31.366 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:31.366 Controller IO queue size 128, less than required. 00:24:31.366 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:31.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:31.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:31.366 Initialization complete. Launching workers. 
00:24:31.366 00:24:31.366 ==================== 00:24:31.366 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:31.366 TCP transport: 00:24:31.366 polls: 32537 00:24:31.366 idle_polls: 11698 00:24:31.366 sock_completions: 20839 00:24:31.366 nvme_completions: 4355 00:24:31.366 submitted_requests: 6574 00:24:31.366 queued_requests: 1 00:24:31.366 00:24:31.366 ==================== 00:24:31.366 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:31.366 TCP transport: 00:24:31.366 polls: 32625 00:24:31.366 idle_polls: 11128 00:24:31.366 sock_completions: 21497 00:24:31.366 nvme_completions: 4363 00:24:31.366 submitted_requests: 6568 00:24:31.366 queued_requests: 1 00:24:31.366 ======================================================== 00:24:31.366 Latency(us) 00:24:31.366 Device Information : IOPS MiB/s Average min max 00:24:31.366 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1088.48 272.12 120891.51 63585.07 201686.81 00:24:31.366 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1090.48 272.62 119524.87 51119.19 204506.28 00:24:31.366 ======================================================== 00:24:31.366 Total : 2178.97 544.74 120207.56 51119.19 204506.28 00:24:31.366 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:31.366 rmmod nvme_tcp 00:24:31.366 rmmod nvme_fabrics 00:24:31.366 rmmod nvme_keyring 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1242865 ']' 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1242865 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 1242865 ']' 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 1242865 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:31.366 09:39:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1242865 00:24:31.366 09:39:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:31.366 09:39:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:31.366 09:39:03 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1242865' 00:24:31.366 killing process with pid 1242865 00:24:31.366 09:39:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@968 -- # kill 1242865 00:24:31.366 09:39:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@973 -- # wait 1242865 00:24:33.278 09:39:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:33.278 09:39:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:33.278 09:39:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:33.278 09:39:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:33.278 09:39:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:33.278 09:39:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.278 09:39:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:33.278 09:39:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.823 09:39:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:35.823 00:24:35.823 real 0m23.852s 00:24:35.823 user 0m59.485s 00:24:35.823 sys 0m7.671s 00:24:35.823 09:39:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:35.823 09:39:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:35.823 ************************************ 00:24:35.823 END TEST nvmf_perf 00:24:35.823 ************************************ 00:24:35.823 09:39:07 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:35.823 09:39:07 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:35.823 09:39:07 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:35.823 09:39:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:35.823 ************************************ 00:24:35.823 START TEST nvmf_fio_host 00:24:35.823 ************************************ 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:35.823 * Looking for test storage... 
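Before the fio host test proper begins: condensed from the nvmf_perf run that just finished above, the target-side provisioning is only a handful of RPCs followed by initiator-side spdk_nvme_perf sweeps. A shortened sketch, with rpc.py standing in for the full scripts/rpc.py invocation against the namespaced target and all flags as logged:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # RAM-backed bdev
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # local NVMe at 0000:65:00.0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # one of the fabric-side sweeps, here the queue-depth-1 latency run:
    spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The later runs only vary queue depth, IO size and duration (-q/-o/-t), which is why the logged IOPS and latency figures shift so sharply between them.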
00:24:35.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.823 09:39:07 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:35.824 09:39:07 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:42.415 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:42.416 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
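The NIC selection above works off a PCI vendor:device allowlist; 0x8086:0x159b is the Intel E810 part that the 'ice' driver binds, and both of its ports are present on this node. As a hand check, the same match can be reproduced with pciutils (illustrative only, not part of the harness):

    lspci -d 8086:159b    # list every Intel E810 (device ID 0x159b) function on the host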
00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:42.416 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:42.416 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:42.416 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
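Resolving each matched PCI function to its kernel net device, as the trace just did for cvl_0_0 and cvl_0_1, is a plain sysfs glob. Roughly, with the PCI addresses from the log (a sketch of the idea rather than the exact common.sh loop):

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        # net devices bound to a PCI function appear under its sysfs node
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net device under $pci: ${dev##*/}"
        done
    done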
00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:42.416 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:42.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:42.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:24:42.677 00:24:42.677 --- 10.0.0.2 ping statistics --- 00:24:42.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.677 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:42.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:42.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:24:42.677 00:24:42.677 --- 10.0.0.1 ping statistics --- 00:24:42.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.677 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:42.677 09:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.938 09:39:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1250464 00:24:42.938 09:39:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:42.938 09:39:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:42.938 09:39:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1250464 00:24:42.938 09:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 1250464 ']' 00:24:42.938 09:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.938 09:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:42.938 09:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.938 09:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:42.938 09:39:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.938 [2024-06-11 09:39:14.546638] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:24:42.939 [2024-06-11 09:39:14.546685] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.939 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.939 [2024-06-11 09:39:14.631052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:42.939 [2024-06-11 09:39:14.695787] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
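As in the perf test before it, fio.sh launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A minimal sketch of that wait pattern (the real waitforlisten in autotest_common.sh retries up to the logged max_retries=100 and is more careful about error handling):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the RPC UNIX socket until the target is ready to serve requests
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1    # give up if the target already died
        sleep 0.5
    done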
00:24:42.939 [2024-06-11 09:39:14.695821] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.939 [2024-06-11 09:39:14.695829] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.939 [2024-06-11 09:39:14.695835] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.939 [2024-06-11 09:39:14.695841] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:42.939 [2024-06-11 09:39:14.695889] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.939 [2024-06-11 09:39:14.695972] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.939 [2024-06-11 09:39:14.696127] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.939 [2024-06-11 09:39:14.696128] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:43.894 09:39:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:43.894 09:39:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:24:43.894 09:39:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:43.894 [2024-06-11 09:39:15.605701] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.894 09:39:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:43.894 09:39:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:43.894 09:39:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.894 09:39:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:44.155 Malloc1 00:24:44.155 09:39:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:44.416 09:39:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:44.677 09:39:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:44.677 [2024-06-11 09:39:16.491816] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.937 09:39:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:44.937 09:39:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:44.937 09:39:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:44.937 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:24:44.937 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:24:44.937 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:44.937 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:24:44.937 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:44.937 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:24:44.937 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:24:44.937 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:44.937 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:44.937 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:24:44.937 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:45.221 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:45.221 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:45.221 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:45.221 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:45.221 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:24:45.221 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:45.221 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:45.221 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:45.221 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:45.221 09:39:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:45.491 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:45.491 fio-3.35 00:24:45.491 Starting 1 thread 00:24:45.491 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.063 00:24:48.063 test: (groupid=0, jobs=1): err= 0: pid=1251309: Tue Jun 11 09:39:19 2024 00:24:48.063 read: IOPS=9809, BW=38.3MiB/s (40.2MB/s)(76.9MiB/2006msec) 00:24:48.063 slat (usec): min=2, max=285, avg= 2.22, stdev= 2.79 00:24:48.063 clat (usec): min=3682, max=12186, avg=7186.39, stdev=523.72 00:24:48.063 lat (usec): min=3716, max=12188, avg=7188.61, stdev=523.61 00:24:48.063 clat percentiles (usec): 00:24:48.063 | 1.00th=[ 5932], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 6783], 00:24:48.063 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7308], 00:24:48.063 | 70.00th=[ 7439], 80.00th=[ 7570], 90.00th=[ 7832], 95.00th=[ 7963], 00:24:48.063 | 99.00th=[ 8356], 99.50th=[ 8455], 99.90th=[10028], 99.95th=[11338], 00:24:48.063 | 99.99th=[12125] 00:24:48.063 bw ( KiB/s): min=38296, 
max=39728, per=99.96%, avg=39224.00, stdev=674.88, samples=4 00:24:48.063 iops : min= 9574, max= 9932, avg=9806.00, stdev=168.72, samples=4 00:24:48.063 write: IOPS=9825, BW=38.4MiB/s (40.2MB/s)(77.0MiB/2006msec); 0 zone resets 00:24:48.063 slat (usec): min=2, max=274, avg= 2.31, stdev= 2.15 00:24:48.063 clat (usec): min=2901, max=11317, avg=5779.86, stdev=450.83 00:24:48.063 lat (usec): min=2919, max=11319, avg=5782.17, stdev=450.81 00:24:48.063 clat percentiles (usec): 00:24:48.063 | 1.00th=[ 4817], 5.00th=[ 5145], 10.00th=[ 5276], 20.00th=[ 5473], 00:24:48.063 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5866], 00:24:48.063 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6259], 95.00th=[ 6456], 00:24:48.063 | 99.00th=[ 6783], 99.50th=[ 6915], 99.90th=[ 9896], 99.95th=[10945], 00:24:48.063 | 99.99th=[11338] 00:24:48.063 bw ( KiB/s): min=38856, max=39936, per=99.99%, avg=39298.00, stdev=461.90, samples=4 00:24:48.063 iops : min= 9714, max= 9984, avg=9824.50, stdev=115.47, samples=4 00:24:48.063 lat (msec) : 4=0.06%, 10=99.85%, 20=0.10% 00:24:48.063 cpu : usr=70.07%, sys=27.43%, ctx=50, majf=0, minf=6 00:24:48.063 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:48.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:48.063 issued rwts: total=19678,19710,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.063 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:48.063 00:24:48.063 Run status group 0 (all jobs): 00:24:48.063 READ: bw=38.3MiB/s (40.2MB/s), 38.3MiB/s-38.3MiB/s (40.2MB/s-40.2MB/s), io=76.9MiB (80.6MB), run=2006-2006msec 00:24:48.063 WRITE: bw=38.4MiB/s (40.2MB/s), 38.4MiB/s-38.4MiB/s (40.2MB/s-40.2MB/s), io=77.0MiB (80.7MB), run=2006-2006msec 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # 
awk '{print $3}' 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:48.063 09:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:48.328 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:48.328 fio-3.35 00:24:48.328 Starting 1 thread 00:24:48.328 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.874 00:24:50.874 test: (groupid=0, jobs=1): err= 0: pid=1251827: Tue Jun 11 09:39:22 2024 00:24:50.874 read: IOPS=9008, BW=141MiB/s (148MB/s)(282MiB/2005msec) 00:24:50.874 slat (usec): min=3, max=112, avg= 3.67, stdev= 1.71 00:24:50.874 clat (usec): min=2565, max=16476, avg=8762.53, stdev=2268.79 00:24:50.874 lat (usec): min=2569, max=16479, avg=8766.20, stdev=2269.01 00:24:50.874 clat percentiles (usec): 00:24:50.874 | 1.00th=[ 4113], 5.00th=[ 5276], 10.00th=[ 5866], 20.00th=[ 6718], 00:24:50.874 | 30.00th=[ 7439], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9241], 00:24:50.874 | 70.00th=[10028], 80.00th=[10683], 90.00th=[11731], 95.00th=[12387], 00:24:50.874 | 99.00th=[14484], 99.50th=[15008], 99.90th=[15926], 99.95th=[16057], 00:24:50.874 | 99.99th=[16450] 00:24:50.874 bw ( KiB/s): min=61472, max=78944, per=49.41%, avg=71216.00, stdev=7281.65, samples=4 00:24:50.874 iops : min= 3842, max= 4934, avg=4451.00, stdev=455.10, samples=4 00:24:50.874 write: IOPS=5171, BW=80.8MiB/s (84.7MB/s)(145MiB/1795msec); 0 zone resets 00:24:50.874 slat (usec): min=40, max=442, avg=41.39, stdev= 9.57 00:24:50.874 clat (usec): min=2793, max=17126, avg=9582.23, stdev=1727.59 00:24:50.874 lat (usec): min=2833, max=17265, avg=9623.63, stdev=1730.27 00:24:50.874 clat percentiles (usec): 00:24:50.874 | 1.00th=[ 6194], 5.00th=[ 7046], 10.00th=[ 7504], 20.00th=[ 8160], 00:24:50.874 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:24:50.874 | 70.00th=[10290], 80.00th=[10945], 90.00th=[11731], 95.00th=[12518], 00:24:50.874 | 99.00th=[14615], 99.50th=[15795], 99.90th=[16450], 99.95th=[16712], 00:24:50.874 | 99.99th=[17171] 00:24:50.874 bw ( KiB/s): min=63360, max=81760, per=89.13%, avg=73752.00, stdev=7817.76, samples=4 00:24:50.874 iops : min= 3960, max= 5110, avg=4609.50, stdev=488.61, samples=4 00:24:50.874 lat (msec) : 4=0.65%, 10=67.22%, 20=32.13% 00:24:50.874 cpu : usr=83.83%, sys=13.62%, ctx=17, majf=0, minf=15 00:24:50.874 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:50.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:50.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:50.874 issued rwts: total=18062,9283,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:50.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:50.874 00:24:50.874 Run status group 0 (all jobs): 00:24:50.874 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=282MiB (296MB), run=2005-2005msec 00:24:50.874 WRITE: bw=80.8MiB/s (84.7MB/s), 80.8MiB/s-80.8MiB/s (84.7MB/s-84.7MB/s), io=145MiB (152MB), run=1795-1795msec 00:24:50.874 09:39:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:50.874 09:39:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:50.874 09:39:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:50.874 09:39:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:50.874 09:39:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:50.874 09:39:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:50.874 09:39:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:50.874 09:39:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:50.874 09:39:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:50.874 09:39:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:50.874 09:39:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:50.874 rmmod nvme_tcp 00:24:50.874 rmmod nvme_fabrics 00:24:50.874 rmmod nvme_keyring 00:24:50.874 09:39:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:50.874 09:39:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:50.874 09:39:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:50.875 09:39:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1250464 ']' 00:24:50.875 09:39:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1250464 00:24:50.875 09:39:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 1250464 ']' 00:24:50.875 09:39:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 1250464 00:24:50.875 09:39:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:24:50.875 09:39:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:50.875 09:39:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1250464 00:24:50.875 09:39:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:50.875 09:39:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:50.875 09:39:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1250464' 00:24:50.875 killing process with pid 1250464 00:24:50.875 09:39:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 1250464 00:24:50.875 09:39:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 1250464 00:24:51.135 09:39:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:51.136 09:39:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:51.136 09:39:22 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:51.136 09:39:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:51.136 09:39:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:51.136 09:39:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.136 09:39:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.136 09:39:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.082 09:39:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:53.082 00:24:53.082 real 0m17.612s 00:24:53.082 user 1m10.679s 00:24:53.082 sys 0m7.419s 00:24:53.082 09:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:53.082 09:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.082 ************************************ 00:24:53.082 END TEST nvmf_fio_host 00:24:53.082 ************************************ 00:24:53.082 09:39:24 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:53.082 09:39:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:53.082 09:39:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:53.082 09:39:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:53.352 ************************************ 00:24:53.352 START TEST nvmf_failover 00:24:53.352 ************************************ 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:53.352 * Looking for test storage... 
00:24:53.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [export.sh prepends the same golangci/protoc/go toolchain directories on every re-source, so the traced PATH repeats that triplet several times; the duplicated PATH dumps at paths/export.sh@2 through the final export/echo at @5/@6 are elided here] 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:53.352 09:39:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:53.352 09:39:25 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:53.352 09:39:25 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:53.352 09:39:25 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:53.352 09:39:25 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:53.352 09:39:25 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:53.352 09:39:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:53.352 09:39:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.352 09:39:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:53.352 09:39:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g
is_hw=no 00:24:53.352 09:39:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:53.352 09:39:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.352 09:39:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:53.352 09:39:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.352 09:39:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:53.352 09:39:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:53.352 09:39:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:53.352 09:39:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:59.944 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:59.944 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:59.944 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:59.944 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:59.944 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.945 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.945 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:59.945 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:59.945 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.206 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.206 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.206 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.206 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:00.206 09:39:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.206 09:39:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.206 09:39:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:00.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:00.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.721 ms 00:25:00.466 00:25:00.466 --- 10.0.0.2 ping statistics --- 00:25:00.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.466 rtt min/avg/max/mdev = 0.721/0.721/0.721/0.000 ms 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:25:00.466 00:25:00.466 --- 10.0.0.1 ping statistics --- 00:25:00.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.466 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1256482 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1256482 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 1256482 ']' 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:00.466 09:39:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:00.466 [2024-06-11 09:39:32.144542] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:25:00.466 [2024-06-11 09:39:32.144605] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.466 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.466 [2024-06-11 09:39:32.214845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:00.726 [2024-06-11 09:39:32.288203] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.726 [2024-06-11 09:39:32.288239] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.726 [2024-06-11 09:39:32.288247] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.726 [2024-06-11 09:39:32.288253] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.726 [2024-06-11 09:39:32.288259] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.726 [2024-06-11 09:39:32.288389] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:00.726 [2024-06-11 09:39:32.288548] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.726 [2024-06-11 09:39:32.288549] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:25:00.726 09:39:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:00.726 09:39:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:25:00.726 09:39:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:00.726 09:39:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:00.726 09:39:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:00.726 09:39:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.726 09:39:32 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:00.987 [2024-06-11 09:39:32.606783] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.987 09:39:32 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:01.247 Malloc0 00:25:01.247 09:39:32 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:01.508 09:39:33 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:01.508 09:39:33 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:01.769 [2024-06-11 09:39:33.494352] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.769 09:39:33 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:02.029 [2024-06-11 09:39:33.702882] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:02.029 09:39:33 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:02.290 [2024-06-11 09:39:33.919534] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:02.290 09:39:33 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1256845 00:25:02.290 09:39:33 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:02.290 09:39:33 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:02.290 09:39:33 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1256845 /var/tmp/bdevperf.sock 00:25:02.290 09:39:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 1256845 ']' 00:25:02.290 09:39:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:02.290 09:39:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:02.290 09:39:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:02.290 09:39:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:02.290 09:39:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:02.551 09:39:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:02.551 09:39:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:25:02.551 09:39:34 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:02.813 NVMe0n1 00:25:02.813 09:39:34 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:03.385 00:25:03.385 09:39:35 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1257163 00:25:03.385 09:39:35 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:03.385 09:39:35 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:04.326 09:39:36 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:04.587 [2024-06-11 09:39:36.214121] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238b9e0 is same with the state(5) to be set [this tcp.c:1602 message for tqpair=0x238b9e0 repeats 13 times in total between 09:39:36.214121 and .214218; duplicates elided] 00:25:04.587 09:39:36 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:07.890 09:39:39 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:07.890 00:25:07.890 09:39:39 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:08.149 [2024-06-11 09:39:39.731284] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238d0e0 is same with the state(5) to be set [the same message for tqpair=0x238d0e0 repeats 14 times in total between 09:39:39.731284 and .731403; duplicates elided]
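The failover plumbing exercised above reduces to a handful of RPCs. A minimal sketch, assuming a built SPDK tree (rpc=scripts/rpc.py stands in for the full workspace path used in the trace) and a bdevperf instance already listening on /var/tmp/bdevperf.sock:

rpc=scripts/rpc.py   # shorthand; the trace uses the absolute workspace path
# Initiator side: attach NVMe0 to the primary listener via bdevperf's RPC socket.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# Re-attaching the same bdev name at the second listener records it as an
# alternate path for NVMe0n1 rather than creating a new controller; this is
# the mechanism the failover test leans on.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# Target side (default RPC socket): drop the active listener to force the
# failover; in-flight I/O on 4420 is aborted and retried on 4421.
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420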
00:25:08.149 09:39:39 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:11.448 09:39:42 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:11.448 [2024-06-11 09:39:42.958554] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.448 09:39:42 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:12.389 09:39:43 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:12.389 [2024-06-11 09:39:44.186488] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238d7c0 is same with the state(5) to be set [the same message for tqpair=0x238d7c0 repeats ~48 times between 09:39:44.186488 and .186823; duplicates elided]
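The I/O that keeps running underneath these listener flips was launched at host/failover.sh@30/@38 above. As a sketch of that pattern (paths relative to the SPDK tree):

# Start bdevperf idle (-z) on its own RPC socket; it waits for a
# perform_tests RPC instead of running immediately.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
bdevperf_pid=$!
# Kick off the 15-second verify workload in the background and remember its
# pid so the listener add/remove dance can happen while I/O is in flight.
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!
# ... reconfigure listeners here ...
wait "$run_test_pid"   # the 0 traced below is this run's status: all I/O verified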
00:25:12.650 09:39:44 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1257163 00:25:19.288 0 00:25:19.288 00:25:19.288 09:39:50 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1256845 00:25:19.288 09:39:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 1256845 ']' 00:25:19.288 09:39:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 1256845 00:25:19.288 09:39:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:25:19.288 09:39:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:19.288 09:39:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1256845 00:25:19.288 09:39:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:19.288 09:39:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
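The teardown tracing through here is the suite's standard pattern; the same cleanup was armed for abnormal exits by the trap at host/failover.sh@33 earlier. A sketch of that pattern (process_shm, killprocess and nvmftestfini are helpers from the suite's common scripts):

# Armed before the test body runs: on a signal or premature exit, dump shared
# memory, show the bdevperf log (try.txt), and tear the target down.
trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
# A clean run performs the same killprocess / cat try.txt steps inline, as
# traced above and below.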
00:25:19.288 09:39:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1256845' killing process with pid 1256845 00:25:19.288 09:39:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 1256845 00:25:19.288 09:39:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 1256845 00:25:19.288 09:39:50 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:19.288 [2024-06-11 09:39:33.996767] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:25:19.288 [2024-06-11 09:39:33.996818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256845 ] 00:25:19.288 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.288 [2024-06-11 09:39:34.072023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.288 [2024-06-11 09:39:34.136308] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.288 Running I/O for 15 seconds... 00:25:19.288 [2024-06-11 09:39:36.214887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.288 [2024-06-11 09:39:36.214922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.288 [2024-06-11 09:39:36.214939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.288 [2024-06-11 09:39:36.214947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.288 [the same READ-command / ABORTED - SQ DELETION completion pair repeats for each in-flight I/O, lba 99312 through 99664 and onward in steps of 8 blocks; duplicates elided]
[2024-06-11 09:39:36.215739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.215746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.215755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.215763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.215773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.215780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.215789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.215796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.215805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.215811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.215820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.215828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.215838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.215845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.215854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.215861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.215870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.215876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.215886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.290 [2024-06-11 09:39:36.215893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.215902] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.290 [2024-06-11 09:39:36.215909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.215918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.290 [2024-06-11 09:39:36.215925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.215934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.290 [2024-06-11 09:39:36.215942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.215951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.215958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.215967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.215975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.215984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.215992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216232] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.290 [2024-06-11 09:39:36.216334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.290 [2024-06-11 09:39:36.216341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 
lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 
[2024-06-11 09:39:36.216734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.291 [2024-06-11 09:39:36.216833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.291 [2024-06-11 09:39:36.216848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.291 [2024-06-11 09:39:36.216864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.291 [2024-06-11 09:39:36.216880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.291 [2024-06-11 09:39:36.216896] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.291 [2024-06-11 09:39:36.216913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.291 [2024-06-11 09:39:36.216928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.291 [2024-06-11 09:39:36.216944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.291 [2024-06-11 09:39:36.216960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.291 [2024-06-11 09:39:36.216969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.292 [2024-06-11 09:39:36.216976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.292 [2024-06-11 09:39:36.216984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.292 [2024-06-11 09:39:36.216991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.292 [2024-06-11 09:39:36.217002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.292 [2024-06-11 09:39:36.217009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.292 [2024-06-11 09:39:36.217018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.292 [2024-06-11 09:39:36.217025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.292 [2024-06-11 09:39:36.217034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.292 [2024-06-11 09:39:36.217041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.292 [2024-06-11 09:39:36.217059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.292 [2024-06-11 09:39:36.217066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.292 
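The "(00/08)" in each completion line is the raw NVMe status field pair, Status Code Type / Status Code: SCT 0x0 is Generic Command Status, and within it SC 0x08 is Command Aborted due to SQ Deletion, which spdk_nvme_print_completion renders as the text above. A minimal sketch of decoding that pair from a saved console log; the abridged status table and the example line are illustrative, not part of the test:

```python
import re

# NVMe Generic Command Status (SCT 0x0): SC 0x08 is "Command Aborted due to
# SQ Deletion", which SPDK prints as "ABORTED - SQ DELETION (00/08)".
GENERIC_STATUS = {  # abridged, illustrative subset of the generic status codes
    0x00: "SUCCESSFUL COMPLETION",
    0x07: "ABORTED - BY REQUEST",
    0x08: "ABORTED - SQ DELETION",
}

STATUS_PAIR = re.compile(r"\((?P<sct>[0-9a-fA-F]{2})/(?P<sc>[0-9a-fA-F]{2})\)")

def decode_status(line: str) -> str:
    """Decode the (SCT/SC) pair printed by spdk_nvme_print_completion."""
    m = STATUS_PAIR.search(line)
    if m is None:
        return "no status field"
    sct, sc = int(m["sct"], 16), int(m["sc"], 16)
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct 0x{sct:02x} / sc 0x{sc:02x}"

print(decode_status("ABORTED - SQ DELETION (00/08) qid:1 cid:0"))
```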
00:25:19.292 [2024-06-11 09:39:36.217059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:19.292 [2024-06-11 09:39:36.217066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:19.292 [2024-06-11 09:39:36.217072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100320 len:8 PRP1 0x0 PRP2 0x0
00:25:19.292 [2024-06-11 09:39:36.217079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:19.292 [2024-06-11 09:39:36.217116] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c21e50 was disconnected and freed. reset controller.
00:25:19.292 [2024-06-11 09:39:36.217126] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:19.292 [2024-06-11 09:39:36.217144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:19.292 [2024-06-11 09:39:36.217152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the three remaining admin ASYNC EVENT REQUESTs (cid:1-3) are aborted the same way ...]
00:25:19.292 [2024-06-11 09:39:36.217204] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:19.292 [2024-06-11 09:39:36.220806] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:19.292 [2024-06-11 09:39:36.220831] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c03140 (9): Bad file descriptor
00:25:19.292 [2024-06-11 09:39:36.381363] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
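The handful of bdev_nvme.c and nvme_ctrlr.c lines above are the actual state transitions buried in the abort noise: the qpair is freed, the bdev layer starts failover from 10.0.0.2:4420 to 10.0.0.2:4421, the controller is marked failed and disconnected, and the reset completes about 160 ms later. A sketch for pulling that timeline out of a console log, assuming one log event per line as in the excerpt above; the marker list and file name are hypothetical:

```python
import re

# Log substrings that mark the controller-reset state transitions; the
# per-command abort lines in between are noise for this purpose.
MARKERS = (
    "was disconnected and freed",
    "Start failover from",
    "in failed state",
    "resetting controller",
    "Resetting controller successful",
)
STAMP = re.compile(r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\]")

def reset_timeline(path):
    """Yield (timestamp, message) for each reset-related transition in the log."""
    with open(path) as log:
        for line in log:
            if any(marker in line for marker in MARKERS):
                m = STAMP.search(line)
                yield (m.group(1) if m else "?", line.strip())

for when, what in reset_timeline("console.log"):  # hypothetical file name
    print(when, what)
```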
00:25:19.292 [2024-06-11 09:39:39.732060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:19.292 [2024-06-11 09:39:39.732097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeat for the I/O queued at the next disconnect (READs spanning lba 22968-23488, WRITEs spanning lba 23680-23864 on qid:1), each completed with ABORTED - SQ DELETION (00/08) ...]
00:25:19.294 [2024-06-11 09:39:39.733625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.294 [2024-06-11 09:39:39.733632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.294 [2024-06-11 09:39:39.733642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.294 [2024-06-11 09:39:39.733648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.294 [2024-06-11 09:39:39.733657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.294 [2024-06-11 09:39:39.733664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.294 [2024-06-11 09:39:39.733673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.294 [2024-06-11 09:39:39.733680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.294 [2024-06-11 09:39:39.733689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.295 [2024-06-11 09:39:39.733696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.733705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.295 [2024-06-11 09:39:39.733712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.733721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.295 [2024-06-11 09:39:39.733729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.733738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.295 [2024-06-11 09:39:39.733745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.733754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.295 [2024-06-11 09:39:39.733761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.733770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.295 [2024-06-11 09:39:39.733777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.733787] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.295 [2024-06-11 09:39:39.733793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.733803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.295 [2024-06-11 09:39:39.733812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.733821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.295 [2024-06-11 09:39:39.733828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.733838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.295 [2024-06-11 09:39:39.733846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.733855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.295 [2024-06-11 09:39:39.733862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.733871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.295 [2024-06-11 09:39:39.733878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.733888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.295 [2024-06-11 09:39:39.733895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.733904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.295 [2024-06-11 09:39:39.733911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.733919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.295 [2024-06-11 09:39:39.733926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.733935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.295 [2024-06-11 09:39:39.733942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.733951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.295 [2024-06-11 09:39:39.733958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.733967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.295 [2024-06-11 09:39:39.733973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.733983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.295 [2024-06-11 09:39:39.733990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.733999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.295 [2024-06-11 09:39:39.734006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.734016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.295 [2024-06-11 09:39:39.734023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.734033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.295 [2024-06-11 09:39:39.734040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.734049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.295 [2024-06-11 09:39:39.734056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.734065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.295 [2024-06-11 09:39:39.734071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.734080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.295 [2024-06-11 09:39:39.734088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.734097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.295 [2024-06-11 09:39:39.734104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.734112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:78 nsid:1 lba:23936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.295 [2024-06-11 09:39:39.734119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.734127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.295 [2024-06-11 09:39:39.734135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.734144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.295 [2024-06-11 09:39:39.734151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.734160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.295 [2024-06-11 09:39:39.734167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.734176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.295 [2024-06-11 09:39:39.734182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.734192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.295 [2024-06-11 09:39:39.734199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.734217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.295 [2024-06-11 09:39:39.734223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.295 [2024-06-11 09:39:39.734232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23984 len:8 PRP1 0x0 PRP2 0x0 00:25:19.295 [2024-06-11 09:39:39.734239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.295 [2024-06-11 09:39:39.734273] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c23e80 was disconnected and freed. reset controller. 
00:25:19.295 [2024-06-11 09:39:39.734282] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:25:19.295 [2024-06-11 09:39:39.734301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:19.295 [2024-06-11 09:39:39.734310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:19.295 [2024-06-11 09:39:39.734323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:19.295 [2024-06-11 09:39:39.734331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:19.295 [2024-06-11 09:39:39.734342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:19.295 [2024-06-11 09:39:39.734350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:19.295 [2024-06-11 09:39:39.734358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:19.295 [2024-06-11 09:39:39.734366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:19.295 [2024-06-11 09:39:39.734373] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:19.295 [2024-06-11 09:39:39.738190] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:25:19.295 [2024-06-11 09:39:39.738222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c03140 (9): Bad file descriptor 
00:25:19.295 [2024-06-11 09:39:39.929196] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:19.295 [2024-06-11 09:39:44.188585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:19.296 [2024-06-11 09:39:44.188621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... dozens of similar nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: queued READ (lba 127216-127600) and WRITE (lba 127664-128040) commands on sqid:1, each completed as ABORTED - SQ DELETION (00/08) ...] 
[... repeated nvme_qpair_abort_queued_reqs / nvme_qpair_manual_complete_request sequences elided: queued WRITE commands (sqid:1 cid:0, lba 128048-128152, PRP1 0x0 PRP2 0x0) completed manually as ABORTED - SQ DELETION (00/08) ...] 
00:25:19.299 [2024-06-11 09:39:44.190666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:25:19.299 [2024-06-11 09:39:44.190671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.299 [2024-06-11 09:39:44.190677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128160 len:8 PRP1 0x0 PRP2 0x0 00:25:19.299 [2024-06-11 09:39:44.190684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.299 [2024-06-11 09:39:44.190691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.299 [2024-06-11 09:39:44.190696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.299 [2024-06-11 09:39:44.190702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128168 len:8 PRP1 0x0 PRP2 0x0 00:25:19.299 [2024-06-11 09:39:44.190710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.299 [2024-06-11 09:39:44.190717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.299 [2024-06-11 09:39:44.190723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.299 [2024-06-11 09:39:44.190729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128176 len:8 PRP1 0x0 PRP2 0x0 00:25:19.299 [2024-06-11 09:39:44.190735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.299 [2024-06-11 09:39:44.190743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.299 [2024-06-11 09:39:44.190748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.299 [2024-06-11 09:39:44.190754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128184 len:8 PRP1 0x0 PRP2 0x0 00:25:19.299 [2024-06-11 09:39:44.190761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.299 [2024-06-11 09:39:44.190769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.299 [2024-06-11 09:39:44.190774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.299 [2024-06-11 09:39:44.190780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128192 len:8 PRP1 0x0 PRP2 0x0 00:25:19.299 [2024-06-11 09:39:44.190786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.299 [2024-06-11 09:39:44.190794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.299 [2024-06-11 09:39:44.190799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.299 [2024-06-11 09:39:44.190806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128200 len:8 PRP1 0x0 PRP2 0x0 00:25:19.299 [2024-06-11 09:39:44.190814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.299 [2024-06-11 09:39:44.190821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.299 [2024-06-11 
09:39:44.190827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.299 [2024-06-11 09:39:44.190834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128208 len:8 PRP1 0x0 PRP2 0x0 00:25:19.299 [2024-06-11 09:39:44.190841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.299 [2024-06-11 09:39:44.190848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.299 [2024-06-11 09:39:44.190853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.299 [2024-06-11 09:39:44.190859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128216 len:8 PRP1 0x0 PRP2 0x0 00:25:19.299 [2024-06-11 09:39:44.190867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.299 [2024-06-11 09:39:44.190874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.299 [2024-06-11 09:39:44.190880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.299 [2024-06-11 09:39:44.190885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128224 len:8 PRP1 0x0 PRP2 0x0 00:25:19.299 [2024-06-11 09:39:44.190892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.299 [2024-06-11 09:39:44.190899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.299 [2024-06-11 09:39:44.190908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.299 [2024-06-11 09:39:44.190914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127608 len:8 PRP1 0x0 PRP2 0x0 00:25:19.299 [2024-06-11 09:39:44.190922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.299 [2024-06-11 09:39:44.190930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.299 [2024-06-11 09:39:44.190935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.299 [2024-06-11 09:39:44.190941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127616 len:8 PRP1 0x0 PRP2 0x0 00:25:19.299 [2024-06-11 09:39:44.190947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.299 [2024-06-11 09:39:44.190955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.299 [2024-06-11 09:39:44.190960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.299 [2024-06-11 09:39:44.190966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127624 len:8 PRP1 0x0 PRP2 0x0 00:25:19.299 [2024-06-11 09:39:44.190973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.299 [2024-06-11 09:39:44.190981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.299 [2024-06-11 09:39:44.190986] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.299 [2024-06-11 09:39:44.190992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127632 len:8 PRP1 0x0 PRP2 0x0 00:25:19.299 [2024-06-11 09:39:44.201516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.299 [2024-06-11 09:39:44.201547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.299 [2024-06-11 09:39:44.201555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.299 [2024-06-11 09:39:44.201563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127640 len:8 PRP1 0x0 PRP2 0x0 00:25:19.299 [2024-06-11 09:39:44.201571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.299 [2024-06-11 09:39:44.201578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.299 [2024-06-11 09:39:44.201588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.299 [2024-06-11 09:39:44.201594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127648 len:8 PRP1 0x0 PRP2 0x0 00:25:19.299 [2024-06-11 09:39:44.201602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.299 [2024-06-11 09:39:44.201609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.299 [2024-06-11 09:39:44.201614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.299 [2024-06-11 09:39:44.201620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127656 len:8 PRP1 0x0 PRP2 0x0 00:25:19.299 [2024-06-11 09:39:44.201627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.299 [2024-06-11 09:39:44.201666] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dcc630 was disconnected and freed. reset controller. 
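Every record condensed above carries the same completion status, ABORTED - SQ DELETION (00/08): status code type 0x0 (generic), status code 0x08, "Command Aborted due to SQ Deletion". That is the expected signature of a qpair being torn down under load during failover, not a media or transport data error. To gauge how much in-flight I/O a run cut off this way, the abort records in the bdevperf log can simply be counted; a one-line sketch, assuming the try.txt log this test writes:

# Count commands aborted by queue teardown in the bdevperf output (illustrative only).
grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt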
00:25:19.299 [2024-06-11 09:39:44.201675] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:19.299 [2024-06-11 09:39:44.201701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:19.299 [2024-06-11 09:39:44.201710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:19.299 [2024-06-11 09:39:44.201720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:19.299 [2024-06-11 09:39:44.201726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:19.299 [2024-06-11 09:39:44.201737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:19.299 [2024-06-11 09:39:44.201744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:19.299 [2024-06-11 09:39:44.201752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:19.299 [2024-06-11 09:39:44.201758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:19.299 [2024-06-11 09:39:44.201766] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:19.300 [2024-06-11 09:39:44.201802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c03140 (9): Bad file descriptor
00:25:19.300 [2024-06-11 09:39:44.205399] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:19.300 [2024-06-11 09:39:44.415230] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
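This is one complete failover cycle as bdev_nvme logs it: the active path (10.0.0.2:4422) goes away, queued admin and I/O commands come back as ABORTED - SQ DELETION, bdev_nvme_failover_trid switches to the alternate trid (10.0.0.2:4420), and the controller reset completes. The alternate paths exist because the script exposes the same subsystem on several listeners and attaches each of them; a minimal sketch of that setup, built from the same rpc.py invocations that appear further down in this trace (target-side creation of nqn.2016-06.io.spdk:cnode1 on port 4420 happened earlier in the script and is assumed here):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Target side: add two more listeners for the same subsystem.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# Initiator (bdevperf) side: attach all three paths under one controller name;
# removing the active path later forces exactly the failover printed above.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1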
00:25:19.300 
00:25:19.300                                                 Latency(us)
00:25:19.300 Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:25:19.300 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:19.300 Verification LBA range: start 0x0 length 0x4000
00:25:19.300 NVMe0n1 : 15.01  9204.90  35.96  1403.40  0.00  12038.73  802.13  22937.60
00:25:19.300 ===================================================================================================================
00:25:19.300 Total : 9204.90  35.96  1403.40  0.00  12038.73  802.13  22937.60
00:25:19.300 Received shutdown signal, test time was about 15.000000 seconds
00:25:19.300 
00:25:19.300                                                 Latency(us)
00:25:19.300 Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:25:19.300 ===================================================================================================================
00:25:19.300 Total : 0.00  0.00  0.00  0.00  0.00  0.00  0.00
00:25:19.300 09:39:50 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:19.300 09:39:50 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:19.300 09:39:50 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:19.300 09:39:50 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1260008
00:25:19.300 09:39:50 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:19.300 09:39:50 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1260008 /var/tmp/bdevperf.sock
00:25:19.300 09:39:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 1260008 ']'
00:25:19.300 09:39:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:19.300 09:39:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100
00:25:19.300 09:39:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:19.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
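For reference, the pass gate traced a few records up is a plain line count: host/failover.sh requires exactly three 'Resetting controller successful' messages in try.txt, one per path transition driven by the first bdevperf run, before it launches the second instance (pid 1260008). A minimal standalone sketch of that check, assuming the try.txt path this test uses:

# Fail unless the first run logged exactly three successful failover resets.
log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
count=$(grep -c 'Resetting controller successful' "$log")
(( count == 3 )) || { echo "expected 3 successful resets, got $count" >&2; exit 1; }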
00:25:19.300 09:39:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:19.300 09:39:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:19.560 09:39:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:19.560 09:39:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:25:19.560 09:39:51 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:19.821 [2024-06-11 09:39:51.485809] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:19.821 09:39:51 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:20.082 [2024-06-11 09:39:51.694399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:20.082 09:39:51 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:20.343 NVMe0n1 00:25:20.343 09:39:52 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:20.603 00:25:20.863 09:39:52 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:21.123 00:25:21.123 09:39:52 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:21.123 09:39:52 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:21.383 09:39:53 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:21.643 09:39:53 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:24.942 09:39:56 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:24.942 09:39:56 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:24.942 09:39:56 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:24.942 09:39:56 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1261208 00:25:24.942 09:39:56 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1261208 00:25:25.882 0 00:25:25.882 09:39:57 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:25.882 [2024-06-11 09:39:50.429589] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:25:25.882 [2024-06-11 09:39:50.429647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260008 ] 00:25:25.882 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.882 [2024-06-11 09:39:50.506543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.882 [2024-06-11 09:39:50.570652] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.882 [2024-06-11 09:39:53.186047] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:25.882 [2024-06-11 09:39:53.186092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.882 [2024-06-11 09:39:53.186104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.882 [2024-06-11 09:39:53.186113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.882 [2024-06-11 09:39:53.186120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.882 [2024-06-11 09:39:53.186128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.882 [2024-06-11 09:39:53.186135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.882 [2024-06-11 09:39:53.186143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.882 [2024-06-11 09:39:53.186150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.882 [2024-06-11 09:39:53.186157] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:25.882 [2024-06-11 09:39:53.186184] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:25.882 [2024-06-11 09:39:53.186198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cc140 (9): Bad file descriptor 00:25:25.882 [2024-06-11 09:39:53.194217] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:25.882 Running I/O for 1 seconds... 
00:25:25.882 
00:25:25.882                                                 Latency(us)
00:25:25.882 Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:25:25.882 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:25.882 Verification LBA range: start 0x0 length 0x4000
00:25:25.882 NVMe0n1 : 1.01  9113.38  35.60  0.00  0.00  13987.75  2785.28  12779.52
00:25:25.882 ===================================================================================================================
00:25:25.882 Total : 9113.38  35.60  0.00  0.00  13987.75  2785.28  12779.52
00:25:25.882 09:39:57 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:25.882 09:39:57 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:25:26.141 09:39:57 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:26.402 09:39:58 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
09:39:58 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:25:26.663 09:39:58 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:26.663 09:39:58 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:25:29.963 09:40:01 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
09:40:01 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:25:29.963 09:40:01 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1260008
00:25:29.963 09:40:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 1260008 ']'
00:25:29.963 09:40:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 1260008
00:25:29.963 09:40:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname
00:25:29.963 09:40:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:25:29.963 09:40:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1260008
00:25:29.963 09:40:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:25:29.963 09:40:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:25:29.963 09:40:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1260008'
killing process with pid 1260008
00:25:29.963 09:40:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 1260008
00:25:30.224 09:40:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 1260008
00:25:30.224 09:40:01 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:25:30.224 09:40:01 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:25:30.484 
09:40:02 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:30.484 rmmod nvme_tcp 00:25:30.484 rmmod nvme_fabrics 00:25:30.484 rmmod nvme_keyring 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1256482 ']' 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1256482 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 1256482 ']' 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 1256482 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1256482 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1256482' 00:25:30.484 killing process with pid 1256482 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 1256482 00:25:30.484 09:40:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 1256482 00:25:30.745 09:40:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:30.745 09:40:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:30.745 09:40:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:30.745 09:40:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:30.745 09:40:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:30.745 09:40:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.745 09:40:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:30.745 09:40:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.660 09:40:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:32.660 00:25:32.660 real 0m39.539s 00:25:32.660 user 2m4.288s 00:25:32.660 sys 0m7.784s 00:25:32.660 09:40:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:32.660 09:40:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
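Before the harness moves on to the discovery suite, one sanity check on the numbers reported above: for this 4096-byte verify workload, MiB/s should equal IOPS × 4096 / 2^20, and both meaningful result tables agree with that (9204.90 IOPS → 35.96 MiB/s for the 15 s run, 9113.38 IOPS → 35.60 MiB/s for the 1 s run). A quick way to reproduce the arithmetic:

# Recompute the MiB/s column from IOPS and the 4096-byte IO size.
awk 'BEGIN { printf "%.2f %.2f\n", 9204.90*4096/1048576, 9113.38*4096/1048576 }'
# prints: 35.96 35.60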
00:25:32.660 ************************************ 00:25:32.660 END TEST nvmf_failover 00:25:32.660 ************************************ 00:25:32.660 09:40:04 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:32.660 09:40:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:32.660 09:40:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:32.660 09:40:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:32.922 ************************************ 00:25:32.922 START TEST nvmf_host_discovery 00:25:32.922 ************************************ 00:25:32.922 09:40:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:32.922 * Looking for test storage... 00:25:32.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:32.922 09:40:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.922 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:32.922 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.922 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.922 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.922 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.922 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.922 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.922 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.922 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.922 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.922 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.922 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:32.922 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:32.922 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.923 09:40:04 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:32.923 09:40:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:41.076 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:41.076 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:41.076 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:41.076 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:41.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:25:41.076 00:25:41.076 --- 10.0.0.2 ping statistics --- 00:25:41.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.076 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:25:41.076 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:41.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:25:41.076 00:25:41.076 --- 10.0.0.1 ping statistics --- 00:25:41.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.076 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1266502 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1266502 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 1266502 ']' 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:41.077 09:40:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.077 [2024-06-11 09:40:11.918106] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:25:41.077 [2024-06-11 09:40:11.918173] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.077 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.077 [2024-06-11 09:40:11.988456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.077 [2024-06-11 09:40:12.061140] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.077 [2024-06-11 09:40:12.061174] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.077 [2024-06-11 09:40:12.061181] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:41.077 [2024-06-11 09:40:12.061187] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:41.077 [2024-06-11 09:40:12.061193] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
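The app_setup_trace notices just above are actionable as printed: while this nvmf_tgt instance (-i 0) is alive, its tracepoint buffer lives in shared memory and can be snapshotted at any time. A sketch using exactly the two options the notices name:

# Snapshot tracepoints of the running target (shm id 0, app name "nvmf"), per the notice above.
spdk_trace -s nvmf -i 0
# Or keep the raw shm buffer for offline analysis/debug, as also suggested:
cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0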
00:25:41.077 [2024-06-11 09:40:12.061217] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.077 [2024-06-11 09:40:12.820302] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.077 [2024-06-11 09:40:12.832450] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.077 null0 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.077 null1 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1266574 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1266574 /tmp/host.sock 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 1266574 ']' 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:41.077 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:41.077 09:40:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.399 [2024-06-11 09:40:12.920182] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:25:41.399 [2024-06-11 09:40:12.920227] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1266574 ] 00:25:41.399 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.399 [2024-06-11 09:40:12.995241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.399 [2024-06-11 09:40:13.060928] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.970 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:41.970 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:25:41.970 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:41.970 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:41.970 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.970 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.970 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.970 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:41.970 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.232 09:40:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.232 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:42.232 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:25:42.232 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.232 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.232 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.232 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:42.232 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.232 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.232 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.232 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.232 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.232 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.232 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.494 [2024-06-11 09:40:14.135844] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.494 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.756 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.756 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:42.756 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:42.756 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:42.756 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:42.756 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:42.756 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:42.756 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.756 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.756 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:42.756 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.756 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.756 09:40:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.756 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:42.756 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == \n\v\m\e\0 ]] 00:25:42.756 09:40:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:25:43.016 [2024-06-11 09:40:14.798738] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:43.016 [2024-06-11 09:40:14.798767] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:43.016 [2024-06-11 09:40:14.798785] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:43.276 [2024-06-11 09:40:14.886056] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:43.276 [2024-06-11 09:40:15.072961] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:25:43.276 [2024-06-11 09:40:15.072985] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:43.847 09:40:15 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0 ]] 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:43.847 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:43.848 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:43.848 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:43.848 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:43.848 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:43.848 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:43.848 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:43.848 09:40:15 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@915 -- # (( max-- )) 00:25:43.848 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:43.848 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:43.848 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:43.848 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:43.848 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:43.848 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.848 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.107 [2024-06-11 09:40:15.696076] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:44.107 [2024-06-11 09:40:15.696265] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:44.107 [2024-06-11 09:40:15.696290] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:44.107 [2024-06-11 09:40:15.782647] bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:44.107 09:40:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:25:44.107 [2024-06-11 09:40:15.883431] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:44.107 [2024-06-11 09:40:15.883448] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:44.107 [2024-06-11 09:40:15.883453] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:45.046 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:45.046 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:45.046 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:25:45.046 09:40:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:45.046 09:40:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:45.046 09:40:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:45.046 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:45.046 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.046 09:40:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.307 [2024-06-11 09:40:16.959668] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:45.307 [2024-06-11 09:40:16.959693] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:45.307 [2024-06-11 09:40:16.964838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.307 [2024-06-11 09:40:16.964857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.307 [2024-06-11 09:40:16.964866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.307 [2024-06-11 09:40:16.964874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.307 [2024-06-11 09:40:16.964882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.307 [2024-06-11 09:40:16.964891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.307 [2024-06-11 09:40:16.964899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.307 [2024-06-11 09:40:16.964911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.307 [2024-06-11 09:40:16.964919] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1285740 is same with the state(5) to be set 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:45.307 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:45.307 09:40:16 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:45.308 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:45.308 09:40:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:45.308 09:40:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:45.308 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:45.308 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.308 09:40:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:45.308 09:40:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:45.308 [2024-06-11 09:40:16.974851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1285740 (9): Bad file descriptor 00:25:45.308 [2024-06-11 09:40:16.984890] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.308 [2024-06-11 09:40:16.985096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.308 [2024-06-11 09:40:16.985110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1285740 with addr=10.0.0.2, port=4420 00:25:45.308 [2024-06-11 09:40:16.985119] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1285740 is same with the state(5) to be set 00:25:45.308 [2024-06-11 09:40:16.985131] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1285740 (9): Bad file descriptor 00:25:45.308 [2024-06-11 09:40:16.985142] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.308 [2024-06-11 09:40:16.985149] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.308 [2024-06-11 09:40:16.985156] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.308 [2024-06-11 09:40:16.985168] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
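The checks interleaved with these reset messages all go through the same three helpers, whose expanded traces recur throughout the run above. Reconstructed from those traces (a sketch of the observed behavior, not the verbatim host/discovery.sh and autotest_common.sh source):

get_subsystem_names() {   # controller names the host created, e.g. "nvme0"
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {         # attached namespaces, e.g. "nvme0n1 nvme0n2"
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
waitforcondition() {      # re-evaluate a condition up to 10 times, 1 s apart
    local cond=$1 max=10
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}
# typical use, as in the trace:
# waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'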
00:25:45.308 09:40:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:45.308 [2024-06-11 09:40:16.994946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.308 [2024-06-11 09:40:16.995297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.308 [2024-06-11 09:40:16.995309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1285740 with addr=10.0.0.2, port=4420 00:25:45.308 [2024-06-11 09:40:16.995323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1285740 is same with the state(5) to be set 00:25:45.308 [2024-06-11 09:40:16.995339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1285740 (9): Bad file descriptor 00:25:45.308 [2024-06-11 09:40:16.995350] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.308 [2024-06-11 09:40:16.995356] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.308 [2024-06-11 09:40:16.995363] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.308 [2024-06-11 09:40:16.995376] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.308 [2024-06-11 09:40:17.004998] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.308 [2024-06-11 09:40:17.005543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.308 [2024-06-11 09:40:17.005580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1285740 with addr=10.0.0.2, port=4420 00:25:45.308 [2024-06-11 09:40:17.005591] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1285740 is same with the state(5) to be set 00:25:45.308 [2024-06-11 09:40:17.005609] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1285740 (9): Bad file descriptor 00:25:45.308 [2024-06-11 09:40:17.005621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.308 [2024-06-11 09:40:17.005628] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.308 [2024-06-11 09:40:17.005636] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.308 [2024-06-11 09:40:17.005651] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
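Every reconnect attempt in this stretch fails in posix_sock_create with errno = 111; on Linux that is ECONNREFUSED, which is the expected outcome here, since the nvmf_subsystem_remove_listener call earlier tore down the 4420 listener while the host is still resetting the controller against that port. To decode the errno (assumes python3 is on PATH):

python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# prints: ECONNREFUSED - Connection refused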
00:25:45.308 [2024-06-11 09:40:17.015053] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.308 [2024-06-11 09:40:17.015581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.308 [2024-06-11 09:40:17.015618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1285740 with addr=10.0.0.2, port=4420 00:25:45.308 [2024-06-11 09:40:17.015628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1285740 is same with the state(5) to be set 00:25:45.308 [2024-06-11 09:40:17.015646] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1285740 (9): Bad file descriptor 00:25:45.308 [2024-06-11 09:40:17.015658] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.308 [2024-06-11 09:40:17.015664] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.308 [2024-06-11 09:40:17.015672] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.308 [2024-06-11 09:40:17.015687] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.308 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.308 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:45.308 09:40:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:45.308 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:45.308 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:45.308 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:45.308 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:45.308 [2024-06-11 09:40:17.025107] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.308 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:45.308 [2024-06-11 09:40:17.025546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.308 [2024-06-11 09:40:17.025583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1285740 with addr=10.0.0.2, port=4420 00:25:45.308 [2024-06-11 09:40:17.025594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1285740 is same with the state(5) to be set 00:25:45.308 [2024-06-11 09:40:17.025612] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1285740 (9): Bad file descriptor 00:25:45.308 [2024-06-11 09:40:17.025629] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.308 [2024-06-11 09:40:17.025636] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.308 [2024-06-11 09:40:17.025644] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:25:45.308 [2024-06-11 09:40:17.025658] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.308 09:40:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.308 09:40:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:45.308 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:45.308 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.308 09:40:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:45.308 09:40:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:45.308 [2024-06-11 09:40:17.035162] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.308 [2024-06-11 09:40:17.035591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.308 [2024-06-11 09:40:17.035606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1285740 with addr=10.0.0.2, port=4420 00:25:45.308 [2024-06-11 09:40:17.035614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1285740 is same with the state(5) to be set 00:25:45.308 [2024-06-11 09:40:17.035625] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1285740 (9): Bad file descriptor 00:25:45.308 [2024-06-11 09:40:17.035636] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.308 [2024-06-11 09:40:17.035642] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.308 [2024-06-11 09:40:17.035649] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.308 [2024-06-11 09:40:17.035660] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.308 [2024-06-11 09:40:17.045220] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.308 [2024-06-11 09:40:17.045530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.308 [2024-06-11 09:40:17.045544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1285740 with addr=10.0.0.2, port=4420 00:25:45.308 [2024-06-11 09:40:17.045551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1285740 is same with the state(5) to be set 00:25:45.308 [2024-06-11 09:40:17.045563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1285740 (9): Bad file descriptor 00:25:45.308 [2024-06-11 09:40:17.045573] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.308 [2024-06-11 09:40:17.045580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.308 [2024-06-11 09:40:17.045587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.308 [2024-06-11 09:40:17.045597] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:45.308 [2024-06-11 09:40:17.055275] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.308 [2024-06-11 09:40:17.055642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.308 [2024-06-11 09:40:17.055655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1285740 with addr=10.0.0.2, port=4420 00:25:45.308 [2024-06-11 09:40:17.055662] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1285740 is same with the state(5) to be set 00:25:45.308 [2024-06-11 09:40:17.055673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1285740 (9): Bad file descriptor 00:25:45.308 [2024-06-11 09:40:17.055688] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.308 [2024-06-11 09:40:17.055694] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.308 [2024-06-11 09:40:17.055700] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.308 [2024-06-11 09:40:17.055710] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:45.308 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:45.309 [2024-06-11 09:40:17.065329] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.309 [2024-06-11 09:40:17.065705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.309 [2024-06-11 09:40:17.065716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1285740 with addr=10.0.0.2, port=4420 00:25:45.309 [2024-06-11 09:40:17.065723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1285740 is same with the state(5) to be set 00:25:45.309 [2024-06-11 09:40:17.065734] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1285740 (9): Bad file descriptor 00:25:45.309 [2024-06-11 09:40:17.065744] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.309 [2024-06-11 09:40:17.065749] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.309 [2024-06-11 09:40:17.065756] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.309 [2024-06-11 09:40:17.065766] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
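The port comparisons driving this wait loop come from get_subsystem_paths, which lists the transport service ID of every path on a controller. As a one-off query it reduces to the following (socket path, controller name, and jq filter exactly as in the trace):

rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
# "4420 4421" while both listeners are up; "4421" once the removed path is gone.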
00:25:45.309 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:45.309 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:45.309 09:40:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:45.309 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:45.309 [2024-06-11 09:40:17.075380] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.309 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:45.309 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:45.309 [2024-06-11 09:40:17.075750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.309 [2024-06-11 09:40:17.075762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1285740 with addr=10.0.0.2, port=4420 00:25:45.309 [2024-06-11 09:40:17.075769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1285740 is same with the state(5) to be set 00:25:45.309 [2024-06-11 09:40:17.075780] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1285740 (9): Bad file descriptor 00:25:45.309 [2024-06-11 09:40:17.075790] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.309 [2024-06-11 09:40:17.075796] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.309 [2024-06-11 09:40:17.075803] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.309 [2024-06-11 09:40:17.075813] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
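The autotest_common.sh@913-@917 lines above trace the generic polling helper these checks run through. Reconstructed from the xtrace, it is approximately this (a sketch; @919, visible further below, supplies the sleep between attempts):

    # waitforcondition, pieced together from the autotest_common.sh xtrace (sketch).
    waitforcondition() {
        local cond=$1    # @913: condition string, e.g. '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
        local max=10     # @914: at most 10 attempts
        while (( max-- )); do            # @915
            eval "$cond" && return 0     # @916/@917: done as soon as it holds
            sleep 1                      # @919: back off before re-evaluating
        done
        return 1
    }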
00:25:45.309 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:45.309 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:25:45.309 09:40:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:45.309 09:40:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:45.309 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:45.309 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.309 09:40:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:45.309 09:40:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:45.309 [2024-06-11 09:40:17.085432] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:45.309 [2024-06-11 09:40:17.085795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.309 [2024-06-11 09:40:17.085808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1285740 with addr=10.0.0.2, port=4420 00:25:45.309 [2024-06-11 09:40:17.085815] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1285740 is same with the state(5) to be set 00:25:45.309 [2024-06-11 09:40:17.085825] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1285740 (9): Bad file descriptor 00:25:45.309 [2024-06-11 09:40:17.085836] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:45.309 [2024-06-11 09:40:17.085842] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:45.309 [2024-06-11 09:40:17.085848] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:45.309 [2024-06-11 09:40:17.085858] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
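Likewise, the host/discovery.sh@63 lines above are the get_subsystem_paths helper: it asks the host for controller nvme0's paths and reduces them to a sorted list of transport service IDs. Approximately (a sketch, reconstructed from the trace):

    # get_subsystem_paths, reconstructed from the @63 xtrace (sketch only).
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

The waiter loops until this returns just "4421" ($NVMF_SECOND_PORT), i.e. until the dead 4420 path is dropped; the "not found" / "found again" discovery events below mark exactly that transition.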
00:25:45.309 [2024-06-11 09:40:17.087083] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:45.309 [2024-06-11 09:40:17.087100] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:45.309 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:45.309 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:25:45.309 09:40:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4421 == \4\4\2\1 ]] 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:46.691 09:40:18 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:46.691 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:46.692 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:46.692 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:46.692 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:46.692 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:46.692 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:46.692 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.692 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:46.692 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:46.692 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:46.692 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:46.692 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:46.692 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:46.692 09:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:46.692 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:46.692 09:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.073 [2024-06-11 09:40:19.452495] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:48.073 [2024-06-11 09:40:19.452513] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:48.073 [2024-06-11 09:40:19.452526] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:48.073 [2024-06-11 09:40:19.581939] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:48.333 [2024-06-11 09:40:19.893723] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:48.333 [2024-06-11 09:40:19.893753] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.333 request: 00:25:48.333 { 00:25:48.333 "name": "nvme", 00:25:48.333 "trtype": "tcp", 00:25:48.333 "traddr": "10.0.0.2", 00:25:48.333 "hostnqn": "nqn.2021-12.io.spdk:test", 
00:25:48.333 "adrfam": "ipv4", 00:25:48.333 "trsvcid": "8009", 00:25:48.333 "wait_for_attach": true, 00:25:48.333 "method": "bdev_nvme_start_discovery", 00:25:48.333 "req_id": 1 00:25:48.333 } 00:25:48.333 Got JSON-RPC error response 00:25:48.333 response: 00:25:48.333 { 00:25:48.333 "code": -17, 00:25:48.333 "message": "File exists" 00:25:48.333 } 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:48.333 09:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:48.334 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:48.334 09:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:48.334 09:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:48.334 09:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:48.334 09:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:48.334 09:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:48.334 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:48.334 09:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.334 09:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- 
# type -t rpc_cmd 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.334 request: 00:25:48.334 { 00:25:48.334 "name": "nvme_second", 00:25:48.334 "trtype": "tcp", 00:25:48.334 "traddr": "10.0.0.2", 00:25:48.334 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:48.334 "adrfam": "ipv4", 00:25:48.334 "trsvcid": "8009", 00:25:48.334 "wait_for_attach": true, 00:25:48.334 "method": "bdev_nvme_start_discovery", 00:25:48.334 "req_id": 1 00:25:48.334 } 00:25:48.334 Got JSON-RPC error response 00:25:48.334 response: 00:25:48.334 { 00:25:48.334 "code": -17, 00:25:48.334 "message": "File exists" 00:25:48.334 } 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.334 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:48.594 09:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:48.594 09:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:48.594 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:25:48.594 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:48.594 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:48.594 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:48.594 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:48.594 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:48.595 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:48.595 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:48.595 09:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.534 [2024-06-11 09:40:21.161607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.534 [2024-06-11 09:40:21.161640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12817b0 with addr=10.0.0.2, port=8010 00:25:49.534 [2024-06-11 09:40:21.161655] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:49.534 [2024-06-11 09:40:21.161662] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:49.534 [2024-06-11 09:40:21.161669] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:50.476 [2024-06-11 09:40:22.163935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.476 [2024-06-11 09:40:22.163959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12817b0 with addr=10.0.0.2, port=8010 00:25:50.476 [2024-06-11 09:40:22.163971] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:50.476 [2024-06-11 09:40:22.163978] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:50.476 [2024-06-11 09:40:22.163984] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:51.418 [2024-06-11 09:40:23.165877] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:51.418 request: 00:25:51.418 { 00:25:51.418 "name": "nvme_second", 00:25:51.418 "trtype": "tcp", 00:25:51.418 "traddr": "10.0.0.2", 00:25:51.418 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:51.418 "adrfam": "ipv4", 00:25:51.418 "trsvcid": "8010", 00:25:51.418 "attach_timeout_ms": 3000, 00:25:51.418 "method": "bdev_nvme_start_discovery", 00:25:51.418 "req_id": 1 00:25:51.418 } 00:25:51.418 Got JSON-RPC error response 00:25:51.418 response: 00:25:51.418 { 00:25:51.418 "code": -110, 00:25:51.418 "message": "Connection timed out" 00:25:51.418 } 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:25:51.418 
09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1266574 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:51.418 09:40:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:51.679 rmmod nvme_tcp 00:25:51.679 rmmod nvme_fabrics 00:25:51.679 rmmod nvme_keyring 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1266502 ']' 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1266502 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@949 -- # '[' -z 1266502 ']' 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # kill -0 1266502 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # uname 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1266502 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1266502' 00:25:51.679 killing process with pid 
1266502 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@968 -- # kill 1266502 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@973 -- # wait 1266502 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.679 09:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:54.226 00:25:54.226 real 0m21.064s 00:25:54.226 user 0m25.913s 00:25:54.226 sys 0m6.808s 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.226 ************************************ 00:25:54.226 END TEST nvmf_host_discovery 00:25:54.226 ************************************ 00:25:54.226 09:40:25 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:54.226 09:40:25 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:54.226 09:40:25 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:54.226 09:40:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:54.226 ************************************ 00:25:54.226 START TEST nvmf_host_multipath_status 00:25:54.226 ************************************ 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:54.226 * Looking for test storage... 
00:25:54.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.226 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:54.227 09:40:25 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:54.227 09:40:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:00.816 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:00.816 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:00.816 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:00.816 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:00.816 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:00.816 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:00.816 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:00.817 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:00.817 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
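The nvmf/common.sh@296-@330 lines above build per-family lists of PCI addresses from a vendor:device cache (0x1592/0x159b -> e810, 0x37d2 -> x722, the 0x15b3:* IDs -> mlx) and, because this is a tcp + e810 run, keep only the E810 list. The same probing can be done by hand (sketch; assumes pciutils is installed):

    # List the E810 ports (0x8086:0x159b) reported as Found above.
    lspci -d 8086:159b
    # Resolve the netdev behind a PCI function the way @383 below does.
    ls /sys/bus/pci/devices/0000:4b:00.0/net/    # -> cvl_0_0 per the trace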
00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:00.817 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:00.817 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:00.817 09:40:32 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:00.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:00.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:26:00.817 00:26:00.817 --- 10.0.0.2 ping statistics --- 00:26:00.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.817 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:00.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:00.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:26:00.817 00:26:00.817 --- 10.0.0.1 ping statistics --- 00:26:00.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.817 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:00.817 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:01.081 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1272782 00:26:01.081 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1272782 00:26:01.081 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:01.081 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 1272782 ']' 00:26:01.081 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.081 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:01.081 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.081 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:01.081 09:40:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:01.081 [2024-06-11 09:40:32.688691] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
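The nvmf_tcp_init trace above (nvmf/common.sh@229-@268) splits the two E810 ports across a network namespace so that target (10.0.0.2 on cvl_0_0) and initiator (10.0.0.1 on cvl_0_1) talk over real wire on one box, then proves reachability with the two pings. Condensed, the plumbing amounts to (commands and names copied from the trace):

    # Target interface moves into its own namespace; initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP in
    ping -c 1 10.0.0.2                                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator

This is also why the nvmf_tgt started above runs under "ip netns exec cvl_0_0_ns_spdk" ($NVMF_TARGET_NS_CMD).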
00:26:01.081 [2024-06-11 09:40:32.688752] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.081 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.081 [2024-06-11 09:40:32.776642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:01.081 [2024-06-11 09:40:32.873521] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:01.081 [2024-06-11 09:40:32.873574] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.081 [2024-06-11 09:40:32.873582] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.081 [2024-06-11 09:40:32.873590] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.082 [2024-06-11 09:40:32.873597] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:01.082 [2024-06-11 09:40:32.873734] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.082 [2024-06-11 09:40:32.873739] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.060 09:40:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:02.060 09:40:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:26:02.060 09:40:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:02.060 09:40:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:02.060 09:40:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:02.060 09:40:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:02.060 09:40:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1272782 00:26:02.060 09:40:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:02.060 [2024-06-11 09:40:33.781687] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:02.060 09:40:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:02.321 Malloc0 00:26:02.321 09:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:02.583 09:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:02.844 09:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:02.844 [2024-06-11 09:40:34.599569] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:02.844 09:40:34 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:03.105 [2024-06-11 09:40:34.800094] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:03.105 09:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1273332 00:26:03.105 09:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:03.105 09:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:03.105 09:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1273332 /var/tmp/bdevperf.sock 00:26:03.105 09:40:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 1273332 ']' 00:26:03.105 09:40:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:03.105 09:40:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:03.105 09:40:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:03.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:03.105 09:40:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:03.105 09:40:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:03.365 09:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:03.365 09:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:26:03.365 09:40:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:03.625 09:40:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:04.195 Nvme0n1 00:26:04.195 09:40:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:04.456 Nvme0n1 00:26:04.456 09:40:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:04.456 09:40:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:07.002 09:40:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:07.002 09:40:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:07.002 09:40:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:07.002 09:40:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:07.945 09:40:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:07.945 09:40:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:07.945 09:40:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.945 09:40:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:08.205 09:40:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.206 09:40:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:08.206 09:40:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.206 09:40:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:08.467 09:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:08.467 09:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:08.467 09:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.467 09:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:08.728 09:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.728 09:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:08.728 09:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.728 09:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:08.989 09:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.989 09:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:08.989 09:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.989 09:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:26:08.989 09:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.989 09:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:08.989 09:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:08.989 09:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.250 09:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.250 09:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:09.250 09:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:09.511 09:40:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:09.771 09:40:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:10.714 09:40:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:10.714 09:40:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:10.714 09:40:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.714 09:40:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.975 09:40:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.975 09:40:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:10.975 09:40:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.975 09:40:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:11.235 09:40:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.235 09:40:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:11.235 09:40:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.235 09:40:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:11.496 09:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:26:11.496 09:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:11.496 09:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.496 09:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:11.757 09:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.757 09:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:11.757 09:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.757 09:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:11.757 09:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.757 09:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:11.757 09:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.757 09:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:12.018 09:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.018 09:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:12.018 09:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:12.279 09:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:12.539 09:40:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:13.481 09:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:13.481 09:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:13.481 09:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.481 09:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:13.742 09:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.742 09:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:26:13.742 09:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.742 09:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:14.003 09:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:14.003 09:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:14.003 09:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.003 09:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:14.003 09:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.003 09:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:14.003 09:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.003 09:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:14.265 09:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.265 09:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:14.265 09:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.265 09:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:14.526 09:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.526 09:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:14.526 09:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.526 09:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:14.787 09:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.787 09:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:14.787 09:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:15.048 09:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:15.309 09:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:16.253 09:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:16.253 09:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:16.253 09:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.253 09:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:16.515 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.515 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:16.515 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.515 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:16.775 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:16.775 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:16.775 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.775 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:16.775 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.775 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:16.775 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.775 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:17.041 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.041 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:17.041 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.041 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:17.367 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
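[Annotation] Every port_status check in this run reduces to the same two commands echoed at multipath_status.sh@64: a bdev_nvme_get_io_paths RPC against bdevperf's socket, piped through a jq filter keyed on the listener's trsvcid, followed by a [[ ... ]] comparison against the expected value (the \t\r\u\e and \f\a\l\s\e patterns above are just bash xtrace escaping). A reconstruction of that helper as the trace implies it, not copied from multipath_status.sh:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # port_status <trsvcid> <field> <expected>: ask bdevperf for its I/O
    # paths and assert one field (current, connected or accessible) of the
    # path that terminates at the given listener port.
    port_status() {
        local port=$1 field=$2 expected=$3 status
        status=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$status" == "$expected" ]]
    }

    port_status 4420 current true    # mirrors the first check of each phase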
00:26:17.367 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:17.367 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.367 09:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:17.628 09:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.628 09:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:17.628 09:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:17.628 09:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:17.889 09:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:18.833 09:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:18.833 09:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:18.833 09:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.833 09:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:19.094 09:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:19.094 09:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:19.094 09:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.094 09:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:19.354 09:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:19.354 09:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:19.354 09:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.354 09:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:19.615 09:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.615 09:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
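[Annotation] Each phase of the test has the same shape: set_ANA_state flips the ANA state of the two listeners through the target's RPC socket, a one-second sleep gives the initiator time to pick up the ANA change, and check_status asserts the six per-path fields in the order traced at @68 through @73. A sketch of those two wrappers as the xtrace output implies them, reusing the port_status reconstruction from the previous note:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # set_ANA_state <state-4420> <state-4421>: optimized, non_optimized or
    # inaccessible; sent to the target's default socket, not bdevperf's.
    set_ANA_state() {
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # check_status: current, connected, accessible for ports 4420/4421.
    check_status() {
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
        port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

Read this way, the check_status false false true true false false call in progress above (the inaccessible/inaccessible phase) says: neither path is current or accessible, while both TCP connections stay up.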
00:26:19.615 09:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.615 09:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:19.875 09:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.875 09:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:19.875 09:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.875 09:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:20.135 09:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:20.135 09:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:20.135 09:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:20.136 09:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.136 09:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:20.136 09:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:20.136 09:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:20.395 09:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:20.655 09:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:21.596 09:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:21.596 09:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:21.596 09:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.596 09:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:21.856 09:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:21.856 09:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:21.856 09:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.856 09:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:22.117 09:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.117 09:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:22.117 09:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:22.117 09:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.378 09:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.378 09:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:22.378 09:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.378 09:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:22.639 09:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.639 09:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:22.639 09:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.639 09:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:22.639 09:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:22.639 09:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:22.639 09:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.639 09:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:22.900 09:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.900 09:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:23.160 09:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:23.160 09:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:26:23.420 09:40:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:23.681 09:40:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:24.621 09:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:24.621 09:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:24.621 09:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.621 09:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:24.881 09:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.881 09:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:24.881 09:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:24.881 09:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.141 09:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.141 09:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:25.141 09:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.141 09:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:25.401 09:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.401 09:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:25.401 09:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.401 09:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:25.401 09:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.402 09:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:25.402 09:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.402 09:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:25.662 09:40:57 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.662 09:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:25.662 09:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.662 09:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:25.924 09:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.924 09:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:25.924 09:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:26.185 09:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:26.445 09:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:27.386 09:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:27.386 09:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:27.386 09:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.386 09:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:27.647 09:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.647 09:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:27.647 09:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.647 09:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:27.907 09:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.907 09:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:27.907 09:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.907 09:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:28.168 09:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.168 09:40:59 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:28.168 09:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.168 09:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:28.168 09:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.168 09:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:28.168 09:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.168 09:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:28.428 09:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.428 09:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:28.428 09:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.428 09:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:28.689 09:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.689 09:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:28.689 09:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:28.949 09:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:29.210 09:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:30.152 09:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:30.152 09:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:30.152 09:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.152 09:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:30.423 09:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.423 09:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:30.423 09:41:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:30.423 09:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.690 09:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.690 09:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:30.690 09:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.690 09:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:30.690 09:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.690 09:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:30.690 09:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.690 09:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:30.951 09:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.951 09:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:30.951 09:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.951 09:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:31.212 09:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.212 09:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:31.212 09:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.212 09:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:31.473 09:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.473 09:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:31.473 09:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:31.734 09:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:31.734 09:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:33.152 09:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:33.152 09:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:33.152 09:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.152 09:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:33.152 09:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.152 09:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:33.152 09:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.152 09:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:33.413 09:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:33.413 09:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:33.413 09:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.413 09:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:33.413 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.413 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:33.413 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.413 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:33.673 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.673 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:33.673 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.673 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:33.934 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.934 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:33.934 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.934 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:34.195 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:34.195 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1273332 00:26:34.195 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 1273332 ']' 00:26:34.195 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 1273332 00:26:34.195 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:26:34.195 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:34.195 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1273332 00:26:34.195 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:26:34.195 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:26:34.195 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1273332' 00:26:34.195 killing process with pid 1273332 00:26:34.195 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 1273332 00:26:34.195 09:41:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 1273332 00:26:34.195 Connection closed with partial response: 00:26:34.195 00:26:34.195 00:26:34.460 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1273332 00:26:34.460 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:34.460 [2024-06-11 09:40:34.862838] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:26:34.460 [2024-06-11 09:40:34.862895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273332 ] 00:26:34.460 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.460 [2024-06-11 09:40:34.913017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.460 [2024-06-11 09:40:34.965145] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:34.460 Running I/O for 90 seconds... 
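[Annotation] At this point the trace has switched to the bdevperf log, the cat of .../spdk/test/nvmf/host/try.txt issued at multipath_status.sh@141 above: bdevperf's own startup banner, then the 90-second verify run captured while the ANA states were being flipped. The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions that follow are expected, not failures: SCT 3h/SC 02h is the NVMe path-related "ANA Inaccessible" status a controller returns for I/O sent to a listener whose ANA group is inaccessible, and the multipath layer requeues those commands on the surviving path. To get a quick per-queue tally from a saved copy of the log, an illustrative one-liner (path as printed above):

    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt |
        awk '{ count[$NF]++ } END { for (q in count) print q, count[q] }'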
00:26:34.460 [2024-06-11 09:40:49.383992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.460 [2024-06-11 09:40:49.384023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0
[... a long run of near-identical command/completion NOTICE pairs omitted: READ and WRITE commands on qid:1 (len:8), in two bursts (lba 106288-107304 at 09:40:49 and lba 9608-10576 at 09:41:03), every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), the path-related NVMe status expected while the multipath test drives the active path's ANA state to inaccessible; the run ends at 09:41:03.511404 with sqhd:0060 ...]
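The omitted flood is uniform enough to summarize mechanically. A minimal sketch, assuming the console output has been saved to a file named build.log (the filename is illustrative, not produced by the test), that counts printed completions per NVMe status string:

  # Extract the status portion of each spdk_nvme_print_completion NOTICE
  # line and tally occurrences, most frequent status first.
  grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z ]*([0-9a-f/]*)' build.log \
    | sed 's/.*\*NOTICE\*: //' \
    | sort | uniq -c | sort -rn

For this run it would report a single bucket, ASYMMETRIC ACCESS INACCESSIBLE (03/02), with one entry per printed completion.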
(03/02) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:34.467 [2024-06-11 09:41:03.511369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.467 [2024-06-11 09:41:03.511374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:34.467 [2024-06-11 09:41:03.511384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.467 [2024-06-11 09:41:03.511389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:34.467 [2024-06-11 09:41:03.511399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.467 [2024-06-11 09:41:03.511404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:34.467 Received shutdown signal, test time was about 29.505158 seconds 00:26:34.467 00:26:34.467 Latency(us) 00:26:34.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.467 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:34.467 Verification LBA range: start 0x0 length 0x4000 00:26:34.467 Nvme0n1 : 29.50 9727.35 38.00 0.00 0.00 13140.04 539.31 3019898.88 00:26:34.467 =================================================================================================================== 00:26:34.467 Total : 9727.35 38.00 0.00 0.00 13140.04 539.31 3019898.88 00:26:34.467 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:34.467 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:34.467 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:34.467 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:34.467 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:34.467 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:34.468 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:34.468 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:34.468 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:34.468 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:34.468 rmmod nvme_tcp 00:26:34.729 rmmod nvme_fabrics 00:26:34.729 rmmod nvme_keyring 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1272782 ']' 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # 
killprocess 1272782 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 1272782 ']' 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 1272782 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1272782 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1272782' 00:26:34.729 killing process with pid 1272782 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 1272782 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 1272782 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:34.729 09:41:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.278 09:41:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:37.278 00:26:37.278 real 0m42.951s 00:26:37.278 user 1m56.448s 00:26:37.278 sys 0m11.151s 00:26:37.278 09:41:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:37.278 09:41:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:37.278 ************************************ 00:26:37.278 END TEST nvmf_host_multipath_status 00:26:37.278 ************************************ 00:26:37.278 09:41:08 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:37.278 09:41:08 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:37.278 09:41:08 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:37.278 09:41:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:37.278 ************************************ 00:26:37.278 START TEST nvmf_discovery_remove_ifc 00:26:37.278 ************************************ 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:37.278 * Looking for test storage... 
00:26:37.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.278 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:37.279 09:41:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:43.872 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:43.872 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:43.872 09:41:15 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:43.872 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:43.872 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:43.872 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:43.873 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:43.873 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:43.873 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:43.873 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:43.873 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:43.873 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:43.873 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:43.873 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:43.873 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:43.873 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:43.873 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:43.873 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:43.873 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:43.873 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:43.873 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:43.873 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.135 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.135 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.135 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:44.135 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.135 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:44.135 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.135 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:44.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:26:44.396 00:26:44.396 --- 10.0.0.2 ping statistics --- 00:26:44.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.396 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:26:44.396 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:44.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.425 ms 00:26:44.396 00:26:44.396 --- 10.0.0.1 ping statistics --- 00:26:44.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.396 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:26:44.396 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.396 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:44.397 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:44.397 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.397 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:44.397 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:44.397 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.397 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:44.397 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:44.397 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:44.397 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:44.397 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:44.397 09:41:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.397 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1283657 00:26:44.397 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1283657 00:26:44.397 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:44.397 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 1283657 ']' 00:26:44.397 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.397 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:44.397 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.397 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:44.397 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.397 [2024-06-11 09:41:16.053646] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:26:44.397 [2024-06-11 09:41:16.053697] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.397 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.397 [2024-06-11 09:41:16.120694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.397 [2024-06-11 09:41:16.186198] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.397 [2024-06-11 09:41:16.186232] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.397 [2024-06-11 09:41:16.186239] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.397 [2024-06-11 09:41:16.186245] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.397 [2024-06-11 09:41:16.186251] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:44.397 [2024-06-11 09:41:16.186271] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.658 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:44.658 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:26:44.658 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:44.658 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:44.658 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.658 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:44.658 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:44.658 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:44.658 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.658 [2024-06-11 09:41:16.327065] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.658 [2024-06-11 09:41:16.335228] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:44.658 null0 00:26:44.658 [2024-06-11 09:41:16.367238] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.658 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:44.658 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1283676 00:26:44.658 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1283676 /tmp/host.sock 00:26:44.658 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:44.658 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 1283676 ']' 00:26:44.658 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:26:44.658 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:44.658 
09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:44.658 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:44.658 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:44.658 09:41:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.658 [2024-06-11 09:41:16.439624] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:26:44.658 [2024-06-11 09:41:16.439672] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1283676 ] 00:26:44.658 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.920 [2024-06-11 09:41:16.514180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.920 [2024-06-11 09:41:16.578697] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.492 09:41:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:45.492 09:41:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:26:45.492 09:41:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:45.492 09:41:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:45.492 09:41:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:45.492 09:41:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:45.492 09:41:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:45.492 09:41:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:45.492 09:41:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:45.492 09:41:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:45.492 09:41:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:45.492 09:41:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:45.492 09:41:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:45.492 09:41:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:46.877 [2024-06-11 09:41:18.297481] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:46.877 [2024-06-11 09:41:18.297502] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:46.877 [2024-06-11 09:41:18.297515] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:46.877 [2024-06-11 09:41:18.425930] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:46.877 [2024-06-11 09:41:18.526499] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:46.877 [2024-06-11 09:41:18.526550] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:46.877 [2024-06-11 09:41:18.526571] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:46.877 [2024-06-11 09:41:18.526586] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:46.877 [2024-06-11 09:41:18.526605] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:46.877 [2024-06-11 09:41:18.535231] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x241cea0 was disconnected and freed. delete nvme_qpair. 
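At this point the attach is complete: the target (pid 1283657) serves nqn.2016-06.io.spdk:cnode0 at 10.0.0.2 inside cvl_0_0_ns_spdk (port 8009 for discovery, 4420 for I/O), and the host-side app (pid 1283676, socket /tmp/host.sock) has built bdev nvme0n1 from it. The rpc_cmd sequence that drove this reduces to roughly the following sketch (rpc_cmd is the autotest wrapper around scripts/rpc.py, pointed here at the host app's socket; flags copied from the trace at @65-@69 and @29):

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init        # app was started --wait-for-rpc
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # get_bdev_list: "nvme0n1"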
00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:46.877 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:47.138 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:47.138 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:47.138 09:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:48.079 09:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:48.079 09:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.079 09:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:48.079 09:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:48.079 09:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:48.079 09:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:48.079 09:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:48.079 09:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:48.079 09:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:48.079 09:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:49.020 09:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:49.020 09:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.020 09:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:49.020 09:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:49.020 09:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.020 09:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:26:49.021 09:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:49.021 09:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:49.021 09:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:49.021 09:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:50.404 09:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:50.404 09:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:50.404 09:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.404 09:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:50.404 09:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:50.404 09:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:50.404 09:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.404 09:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:50.404 09:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:50.404 09:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:51.346 09:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:51.346 09:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.346 09:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:51.346 09:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:51.346 09:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:51.346 09:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:51.346 09:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:51.346 09:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:51.346 09:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:51.346 09:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:52.291 09:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:52.291 09:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.291 09:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:52.291 09:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:52.291 09:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:52.291 09:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:52.291 09:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:52.291 [2024-06-11 09:41:23.966988] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:52.291 [2024-06-11 09:41:23.967031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.291 [2024-06-11 09:41:23.967042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.291 [2024-06-11 09:41:23.967052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.291 [2024-06-11 09:41:23.967059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.291 [2024-06-11 09:41:23.967067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.291 [2024-06-11 09:41:23.967074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.291 [2024-06-11 09:41:23.967086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.291 [2024-06-11 09:41:23.967093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.291 [2024-06-11 09:41:23.967101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.291 [2024-06-11 09:41:23.967108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.291 [2024-06-11 09:41:23.967115] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e4220 is same with the state(5) to be set 00:26:52.291 [2024-06-11 09:41:23.977014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e4220 (9): Bad file descriptor 00:26:52.291 09:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:52.291 [2024-06-11 09:41:23.987060] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:52.291 09:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:52.291 09:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:53.263 [2024-06-11 09:41:25.008402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:53.263 [2024-06-11 09:41:25.008495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e4220 with addr=10.0.0.2, port=4420 00:26:53.263 [2024-06-11 09:41:25.008527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e4220 is same with the state(5) to be set 00:26:53.263 [2024-06-11 09:41:25.008585] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e4220 (9): Bad file descriptor 00:26:53.263 [2024-06-11 09:41:25.008694] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
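The spdk_sock_recv() timeout (errno 110) and the ABORTED - SQ DELETION dump above are the intended failure: a few records earlier (@75/@76) the test yanked the target-side interface out from under the live connection. The fault injection itself is just two commands, copied from the trace:

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0   # @75
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down              # @76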
00:26:53.263 [2024-06-11 09:41:25.008735] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:53.263 [2024-06-11 09:41:25.008756] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:53.263 [2024-06-11 09:41:25.008778] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:53.263 [2024-06-11 09:41:25.008821] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:53.263 [2024-06-11 09:41:25.008844] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:53.263 09:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:53.263 09:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:53.263 09:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:53.263 09:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:53.263 09:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:53.263 09:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:53.263 09:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:53.263 09:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:53.263 09:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:53.263 09:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:54.206 [2024-06-11 09:41:26.011255] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.206 [2024-06-11 09:41:26.011290] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:54.206 [2024-06-11 09:41:26.011318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.206 [2024-06-11 09:41:26.011333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.206 [2024-06-11 09:41:26.011344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.206 [2024-06-11 09:41:26.011351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.206 [2024-06-11 09:41:26.011358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.206 [2024-06-11 09:41:26.011365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.206 [2024-06-11 09:41:26.011374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.206 [2024-06-11 09:41:26.011380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.206 [2024-06-11 09:41:26.011388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.206 [2024-06-11 09:41:26.011396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.206 [2024-06-11 09:41:26.011403] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:26:54.206 [2024-06-11 09:41:26.011807] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e36b0 (9): Bad file descriptor 00:26:54.206 [2024-06-11 09:41:26.012817] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:54.206 [2024-06-11 09:41:26.012827] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:54.467 09:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:55.853 09:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:55.853 09:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:55.853 09:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:55.853 09:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.853 09:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:26:55.853 09:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.853 09:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:55.853 09:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.853 09:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:55.853 09:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:56.425 [2024-06-11 09:41:28.072507] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:56.425 [2024-06-11 09:41:28.072524] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:56.425 [2024-06-11 09:41:28.072538] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:56.425 [2024-06-11 09:41:28.199925] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:56.687 [2024-06-11 09:41:28.300634] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:56.687 [2024-06-11 09:41:28.300672] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:56.687 [2024-06-11 09:41:28.300692] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:56.687 [2024-06-11 09:41:28.300706] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:56.687 [2024-06-11 09:41:28.300713] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:56.687 [2024-06-11 09:41:28.309072] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x23f3cc0 was disconnected and freed. delete nvme_qpair. 
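Recovery is the mirror image (@82/@83): restore the address and bring the port back up, and the still-registered discovery service re-attaches the subsystem on its own, as a new controller. The bdev therefore comes back as nvme1n1 rather than nvme0n1, which is what wait_for_bdev nvme1n1 polls for:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # @82
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # @83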
00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1283676 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 1283676 ']' 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 1283676 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1283676 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1283676' 00:26:56.687 killing process with pid 1283676 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 1283676 00:26:56.687 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 1283676 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:56.949 rmmod nvme_tcp 00:26:56.949 rmmod nvme_fabrics 00:26:56.949 rmmod nvme_keyring 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
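nvmfcleanup then unwinds the initiator kernel stack; condensed from the trace above, with the retry shape assumed from the visible for i in {1..20}:

    # host-side module teardown as traced; break-on-success is an assumption
    sync
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # the rmmod lines above show nvme_tcp,
                                           # nvme_fabrics and nvme_keyring unloading
    done
    modprobe -v -r nvme-fabrics

The same teardown runs again at the end of the identify test further down.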
00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1283657 ']' 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1283657 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 1283657 ']' 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 1283657 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1283657 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1283657' 00:26:56.949 killing process with pid 1283657 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 1283657 00:26:56.949 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 1283657 00:26:57.210 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:57.210 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:57.210 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:57.210 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:57.210 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:57.210 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.210 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:57.210 09:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.125 09:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:59.125 00:26:59.125 real 0m22.228s 00:26:59.125 user 0m26.410s 00:26:59.125 sys 0m6.505s 00:26:59.125 09:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:59.125 09:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.125 ************************************ 00:26:59.125 END TEST nvmf_discovery_remove_ifc 00:26:59.125 ************************************ 00:26:59.387 09:41:30 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:59.387 09:41:30 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:59.387 09:41:30 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:59.387 09:41:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:59.387 ************************************ 00:26:59.387 START TEST nvmf_identify_kernel_target 00:26:59.387 ************************************ 00:26:59.387 09:41:30 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:59.387 * Looking for test storage... 00:26:59.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 
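From here the run builds its fabric before any NVMe traffic flows: prepare_net_devs picks the two e810 ports (0000:4b:00.0 and 0000:4b:00.1, surfaced as cvl_0_0 and cvl_0_1), nvmf_tcp_init splits them into a point-to-point link across a network namespace, and configure_kernel_target later exports a local NVMe namespace through the kernel nvmet TCP target. Two condensed sketches of those phases follow; every interface name, address, and echoed value is taken from the traces below, while the ordering and the configfs attribute names are the standard nvmet ones (the trace shows only the echoed values, not the redirection targets):

    # phase 1: namespaced point-to-point link (target side inside cvl_0_0_ns_spdk)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                  # both directions are
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # verified below

    # phase 2: kernel NVMe-oF target via configfs (nvmet_tcp is pulled in when
    # the port trtype is set; the teardown below unloads nvmet_tcp and nvmet)
    modprobe nvmet
    nqn=nqn.2016-06.io.spdk:testnqn
    sub=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir $sub $sub/namespaces/1 $port
    echo SPDK-$nqn    > $sub/attr_model
    echo 1            > $sub/attr_allow_any_host
    echo /dev/nvme0n1 > $sub/namespaces/1/device_path
    echo 1            > $sub/namespaces/1/enable
    echo 10.0.0.1     > $port/addr_traddr
    echo tcp          > $port/addr_trtype
    echo 4420         > $port/addr_trsvcid
    echo ipv4         > $port/addr_adrfam
    ln -s $sub $port/subsystems/

After that, nvme discover against 10.0.0.1:4420 should report two records, the discovery subsystem plus nqn.2016-06.io.spdk:testnqn, which is exactly what the discovery log output further down shows.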
00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:59.388 09:41:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.531 09:41:38 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:07.531 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:07.531 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:07.531 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.532 
09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:07.532 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:07.532 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:07.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:07.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:27:07.532 00:27:07.532 --- 10.0.0.2 ping statistics --- 00:27:07.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.532 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:07.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:07.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:27:07.532 00:27:07.532 --- 10.0.0.1 ping statistics --- 00:27:07.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.532 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.532 
09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:07.532 09:41:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:10.080 Waiting for block devices as requested 00:27:10.080 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:10.080 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:10.342 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:10.342 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:10.342 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:10.602 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:10.602 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:10.602 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:10.863 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:10.863 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:11.124 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:11.124 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:11.124 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:11.384 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:11.384 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:11.384 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:11.384 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:11.646 09:41:43 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:11.646 No valid GPT data, bailing 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:11.646 00:27:11.646 Discovery Log Number of Records 2, Generation counter 2 00:27:11.646 =====Discovery Log Entry 0====== 00:27:11.646 trtype: tcp 00:27:11.646 adrfam: ipv4 00:27:11.646 subtype: current discovery subsystem 00:27:11.646 treq: not specified, sq flow control disable supported 00:27:11.646 portid: 1 00:27:11.646 trsvcid: 4420 00:27:11.646 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:11.646 traddr: 10.0.0.1 00:27:11.646 eflags: none 00:27:11.646 sectype: none 00:27:11.646 =====Discovery Log Entry 1====== 
00:27:11.646 trtype: tcp 00:27:11.646 adrfam: ipv4 00:27:11.646 subtype: nvme subsystem 00:27:11.646 treq: not specified, sq flow control disable supported 00:27:11.646 portid: 1 00:27:11.646 trsvcid: 4420 00:27:11.646 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:11.646 traddr: 10.0.0.1 00:27:11.646 eflags: none 00:27:11.646 sectype: none 00:27:11.646 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:11.646 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:11.646 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.646 ===================================================== 00:27:11.646 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:11.646 ===================================================== 00:27:11.646 Controller Capabilities/Features 00:27:11.646 ================================ 00:27:11.646 Vendor ID: 0000 00:27:11.646 Subsystem Vendor ID: 0000 00:27:11.646 Serial Number: f3d4e60f946452f4a18f 00:27:11.646 Model Number: Linux 00:27:11.646 Firmware Version: 6.7.0-68 00:27:11.646 Recommended Arb Burst: 0 00:27:11.646 IEEE OUI Identifier: 00 00 00 00:27:11.646 Multi-path I/O 00:27:11.646 May have multiple subsystem ports: No 00:27:11.646 May have multiple controllers: No 00:27:11.646 Associated with SR-IOV VF: No 00:27:11.646 Max Data Transfer Size: Unlimited 00:27:11.646 Max Number of Namespaces: 0 00:27:11.646 Max Number of I/O Queues: 1024 00:27:11.646 NVMe Specification Version (VS): 1.3 00:27:11.646 NVMe Specification Version (Identify): 1.3 00:27:11.646 Maximum Queue Entries: 1024 00:27:11.646 Contiguous Queues Required: No 00:27:11.646 Arbitration Mechanisms Supported 00:27:11.646 Weighted Round Robin: Not Supported 00:27:11.646 Vendor Specific: Not Supported 00:27:11.646 Reset Timeout: 7500 ms 00:27:11.646 Doorbell Stride: 4 bytes 00:27:11.646 NVM Subsystem Reset: Not Supported 00:27:11.646 Command Sets Supported 00:27:11.646 NVM Command Set: Supported 00:27:11.646 Boot Partition: Not Supported 00:27:11.646 Memory Page Size Minimum: 4096 bytes 00:27:11.646 Memory Page Size Maximum: 4096 bytes 00:27:11.646 Persistent Memory Region: Not Supported 00:27:11.646 Optional Asynchronous Events Supported 00:27:11.646 Namespace Attribute Notices: Not Supported 00:27:11.646 Firmware Activation Notices: Not Supported 00:27:11.646 ANA Change Notices: Not Supported 00:27:11.646 PLE Aggregate Log Change Notices: Not Supported 00:27:11.646 LBA Status Info Alert Notices: Not Supported 00:27:11.646 EGE Aggregate Log Change Notices: Not Supported 00:27:11.646 Normal NVM Subsystem Shutdown event: Not Supported 00:27:11.646 Zone Descriptor Change Notices: Not Supported 00:27:11.646 Discovery Log Change Notices: Supported 00:27:11.646 Controller Attributes 00:27:11.646 128-bit Host Identifier: Not Supported 00:27:11.646 Non-Operational Permissive Mode: Not Supported 00:27:11.646 NVM Sets: Not Supported 00:27:11.646 Read Recovery Levels: Not Supported 00:27:11.646 Endurance Groups: Not Supported 00:27:11.646 Predictable Latency Mode: Not Supported 00:27:11.646 Traffic Based Keep ALive: Not Supported 00:27:11.646 Namespace Granularity: Not Supported 00:27:11.646 SQ Associations: Not Supported 00:27:11.646 UUID List: Not Supported 00:27:11.646 Multi-Domain Subsystem: Not Supported 00:27:11.646 Fixed Capacity Management: Not Supported 00:27:11.646 Variable Capacity Management: Not 
Supported 00:27:11.646 Delete Endurance Group: Not Supported 00:27:11.646 Delete NVM Set: Not Supported 00:27:11.646 Extended LBA Formats Supported: Not Supported 00:27:11.646 Flexible Data Placement Supported: Not Supported 00:27:11.646 00:27:11.646 Controller Memory Buffer Support 00:27:11.646 ================================ 00:27:11.646 Supported: No 00:27:11.646 00:27:11.646 Persistent Memory Region Support 00:27:11.646 ================================ 00:27:11.646 Supported: No 00:27:11.646 00:27:11.646 Admin Command Set Attributes 00:27:11.646 ============================ 00:27:11.646 Security Send/Receive: Not Supported 00:27:11.646 Format NVM: Not Supported 00:27:11.646 Firmware Activate/Download: Not Supported 00:27:11.646 Namespace Management: Not Supported 00:27:11.646 Device Self-Test: Not Supported 00:27:11.646 Directives: Not Supported 00:27:11.646 NVMe-MI: Not Supported 00:27:11.646 Virtualization Management: Not Supported 00:27:11.646 Doorbell Buffer Config: Not Supported 00:27:11.646 Get LBA Status Capability: Not Supported 00:27:11.646 Command & Feature Lockdown Capability: Not Supported 00:27:11.646 Abort Command Limit: 1 00:27:11.646 Async Event Request Limit: 1 00:27:11.646 Number of Firmware Slots: N/A 00:27:11.646 Firmware Slot 1 Read-Only: N/A 00:27:11.646 Firmware Activation Without Reset: N/A 00:27:11.646 Multiple Update Detection Support: N/A 00:27:11.646 Firmware Update Granularity: No Information Provided 00:27:11.646 Per-Namespace SMART Log: No 00:27:11.646 Asymmetric Namespace Access Log Page: Not Supported 00:27:11.646 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:11.646 Command Effects Log Page: Not Supported 00:27:11.646 Get Log Page Extended Data: Supported 00:27:11.646 Telemetry Log Pages: Not Supported 00:27:11.646 Persistent Event Log Pages: Not Supported 00:27:11.646 Supported Log Pages Log Page: May Support 00:27:11.646 Commands Supported & Effects Log Page: Not Supported 00:27:11.647 Feature Identifiers & Effects Log Page:May Support 00:27:11.647 NVMe-MI Commands & Effects Log Page: May Support 00:27:11.647 Data Area 4 for Telemetry Log: Not Supported 00:27:11.647 Error Log Page Entries Supported: 1 00:27:11.647 Keep Alive: Not Supported 00:27:11.647 00:27:11.647 NVM Command Set Attributes 00:27:11.647 ========================== 00:27:11.647 Submission Queue Entry Size 00:27:11.647 Max: 1 00:27:11.647 Min: 1 00:27:11.647 Completion Queue Entry Size 00:27:11.647 Max: 1 00:27:11.647 Min: 1 00:27:11.647 Number of Namespaces: 0 00:27:11.647 Compare Command: Not Supported 00:27:11.647 Write Uncorrectable Command: Not Supported 00:27:11.647 Dataset Management Command: Not Supported 00:27:11.647 Write Zeroes Command: Not Supported 00:27:11.647 Set Features Save Field: Not Supported 00:27:11.647 Reservations: Not Supported 00:27:11.647 Timestamp: Not Supported 00:27:11.647 Copy: Not Supported 00:27:11.647 Volatile Write Cache: Not Present 00:27:11.647 Atomic Write Unit (Normal): 1 00:27:11.647 Atomic Write Unit (PFail): 1 00:27:11.647 Atomic Compare & Write Unit: 1 00:27:11.647 Fused Compare & Write: Not Supported 00:27:11.647 Scatter-Gather List 00:27:11.647 SGL Command Set: Supported 00:27:11.647 SGL Keyed: Not Supported 00:27:11.647 SGL Bit Bucket Descriptor: Not Supported 00:27:11.647 SGL Metadata Pointer: Not Supported 00:27:11.647 Oversized SGL: Not Supported 00:27:11.647 SGL Metadata Address: Not Supported 00:27:11.647 SGL Offset: Supported 00:27:11.647 Transport SGL Data Block: Not Supported 00:27:11.647 Replay Protected Memory Block: 
Not Supported 00:27:11.647 00:27:11.647 Firmware Slot Information 00:27:11.647 ========================= 00:27:11.647 Active slot: 0 00:27:11.647 00:27:11.647 00:27:11.647 Error Log 00:27:11.647 ========= 00:27:11.647 00:27:11.647 Active Namespaces 00:27:11.647 ================= 00:27:11.647 Discovery Log Page 00:27:11.647 ================== 00:27:11.647 Generation Counter: 2 00:27:11.647 Number of Records: 2 00:27:11.647 Record Format: 0 00:27:11.647 00:27:11.647 Discovery Log Entry 0 00:27:11.647 ---------------------- 00:27:11.647 Transport Type: 3 (TCP) 00:27:11.647 Address Family: 1 (IPv4) 00:27:11.647 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:11.647 Entry Flags: 00:27:11.647 Duplicate Returned Information: 0 00:27:11.647 Explicit Persistent Connection Support for Discovery: 0 00:27:11.647 Transport Requirements: 00:27:11.647 Secure Channel: Not Specified 00:27:11.647 Port ID: 1 (0x0001) 00:27:11.647 Controller ID: 65535 (0xffff) 00:27:11.647 Admin Max SQ Size: 32 00:27:11.647 Transport Service Identifier: 4420 00:27:11.647 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:11.647 Transport Address: 10.0.0.1 00:27:11.647 Discovery Log Entry 1 00:27:11.647 ---------------------- 00:27:11.647 Transport Type: 3 (TCP) 00:27:11.647 Address Family: 1 (IPv4) 00:27:11.647 Subsystem Type: 2 (NVM Subsystem) 00:27:11.647 Entry Flags: 00:27:11.647 Duplicate Returned Information: 0 00:27:11.647 Explicit Persistent Connection Support for Discovery: 0 00:27:11.647 Transport Requirements: 00:27:11.647 Secure Channel: Not Specified 00:27:11.647 Port ID: 1 (0x0001) 00:27:11.647 Controller ID: 65535 (0xffff) 00:27:11.647 Admin Max SQ Size: 32 00:27:11.647 Transport Service Identifier: 4420 00:27:11.647 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:11.647 Transport Address: 10.0.0.1 00:27:11.647 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:11.909 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.909 get_feature(0x01) failed 00:27:11.909 get_feature(0x02) failed 00:27:11.909 get_feature(0x04) failed 00:27:11.909 ===================================================== 00:27:11.909 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:11.909 ===================================================== 00:27:11.909 Controller Capabilities/Features 00:27:11.909 ================================ 00:27:11.909 Vendor ID: 0000 00:27:11.909 Subsystem Vendor ID: 0000 00:27:11.909 Serial Number: f6bdd3cef58d5b5dd723 00:27:11.909 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:11.909 Firmware Version: 6.7.0-68 00:27:11.909 Recommended Arb Burst: 6 00:27:11.909 IEEE OUI Identifier: 00 00 00 00:27:11.909 Multi-path I/O 00:27:11.909 May have multiple subsystem ports: Yes 00:27:11.909 May have multiple controllers: Yes 00:27:11.909 Associated with SR-IOV VF: No 00:27:11.909 Max Data Transfer Size: Unlimited 00:27:11.909 Max Number of Namespaces: 1024 00:27:11.909 Max Number of I/O Queues: 128 00:27:11.909 NVMe Specification Version (VS): 1.3 00:27:11.909 NVMe Specification Version (Identify): 1.3 00:27:11.909 Maximum Queue Entries: 1024 00:27:11.909 Contiguous Queues Required: No 00:27:11.909 Arbitration Mechanisms Supported 00:27:11.909 Weighted Round Robin: Not Supported 00:27:11.909 Vendor Specific: Not Supported 
00:27:11.909 Reset Timeout: 7500 ms 00:27:11.909 Doorbell Stride: 4 bytes 00:27:11.909 NVM Subsystem Reset: Not Supported 00:27:11.909 Command Sets Supported 00:27:11.909 NVM Command Set: Supported 00:27:11.909 Boot Partition: Not Supported 00:27:11.909 Memory Page Size Minimum: 4096 bytes 00:27:11.909 Memory Page Size Maximum: 4096 bytes 00:27:11.909 Persistent Memory Region: Not Supported 00:27:11.909 Optional Asynchronous Events Supported 00:27:11.909 Namespace Attribute Notices: Supported 00:27:11.909 Firmware Activation Notices: Not Supported 00:27:11.909 ANA Change Notices: Supported 00:27:11.909 PLE Aggregate Log Change Notices: Not Supported 00:27:11.909 LBA Status Info Alert Notices: Not Supported 00:27:11.909 EGE Aggregate Log Change Notices: Not Supported 00:27:11.909 Normal NVM Subsystem Shutdown event: Not Supported 00:27:11.909 Zone Descriptor Change Notices: Not Supported 00:27:11.909 Discovery Log Change Notices: Not Supported 00:27:11.909 Controller Attributes 00:27:11.909 128-bit Host Identifier: Supported 00:27:11.909 Non-Operational Permissive Mode: Not Supported 00:27:11.909 NVM Sets: Not Supported 00:27:11.909 Read Recovery Levels: Not Supported 00:27:11.909 Endurance Groups: Not Supported 00:27:11.909 Predictable Latency Mode: Not Supported 00:27:11.909 Traffic Based Keep ALive: Supported 00:27:11.909 Namespace Granularity: Not Supported 00:27:11.909 SQ Associations: Not Supported 00:27:11.909 UUID List: Not Supported 00:27:11.909 Multi-Domain Subsystem: Not Supported 00:27:11.909 Fixed Capacity Management: Not Supported 00:27:11.909 Variable Capacity Management: Not Supported 00:27:11.909 Delete Endurance Group: Not Supported 00:27:11.909 Delete NVM Set: Not Supported 00:27:11.909 Extended LBA Formats Supported: Not Supported 00:27:11.909 Flexible Data Placement Supported: Not Supported 00:27:11.909 00:27:11.909 Controller Memory Buffer Support 00:27:11.909 ================================ 00:27:11.909 Supported: No 00:27:11.909 00:27:11.909 Persistent Memory Region Support 00:27:11.909 ================================ 00:27:11.909 Supported: No 00:27:11.909 00:27:11.909 Admin Command Set Attributes 00:27:11.909 ============================ 00:27:11.909 Security Send/Receive: Not Supported 00:27:11.909 Format NVM: Not Supported 00:27:11.909 Firmware Activate/Download: Not Supported 00:27:11.909 Namespace Management: Not Supported 00:27:11.909 Device Self-Test: Not Supported 00:27:11.909 Directives: Not Supported 00:27:11.909 NVMe-MI: Not Supported 00:27:11.909 Virtualization Management: Not Supported 00:27:11.909 Doorbell Buffer Config: Not Supported 00:27:11.909 Get LBA Status Capability: Not Supported 00:27:11.909 Command & Feature Lockdown Capability: Not Supported 00:27:11.909 Abort Command Limit: 4 00:27:11.909 Async Event Request Limit: 4 00:27:11.909 Number of Firmware Slots: N/A 00:27:11.909 Firmware Slot 1 Read-Only: N/A 00:27:11.909 Firmware Activation Without Reset: N/A 00:27:11.909 Multiple Update Detection Support: N/A 00:27:11.909 Firmware Update Granularity: No Information Provided 00:27:11.909 Per-Namespace SMART Log: Yes 00:27:11.909 Asymmetric Namespace Access Log Page: Supported 00:27:11.909 ANA Transition Time : 10 sec 00:27:11.909 00:27:11.909 Asymmetric Namespace Access Capabilities 00:27:11.909 ANA Optimized State : Supported 00:27:11.909 ANA Non-Optimized State : Supported 00:27:11.909 ANA Inaccessible State : Supported 00:27:11.909 ANA Persistent Loss State : Supported 00:27:11.909 ANA Change State : Supported 00:27:11.909 ANAGRPID is not 
changed : No 00:27:11.909 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:11.909 00:27:11.909 ANA Group Identifier Maximum : 128 00:27:11.909 Number of ANA Group Identifiers : 128 00:27:11.909 Max Number of Allowed Namespaces : 1024 00:27:11.909 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:11.909 Command Effects Log Page: Supported 00:27:11.909 Get Log Page Extended Data: Supported 00:27:11.909 Telemetry Log Pages: Not Supported 00:27:11.909 Persistent Event Log Pages: Not Supported 00:27:11.909 Supported Log Pages Log Page: May Support 00:27:11.909 Commands Supported & Effects Log Page: Not Supported 00:27:11.909 Feature Identifiers & Effects Log Page:May Support 00:27:11.909 NVMe-MI Commands & Effects Log Page: May Support 00:27:11.909 Data Area 4 for Telemetry Log: Not Supported 00:27:11.909 Error Log Page Entries Supported: 128 00:27:11.909 Keep Alive: Supported 00:27:11.909 Keep Alive Granularity: 1000 ms 00:27:11.909 00:27:11.909 NVM Command Set Attributes 00:27:11.909 ========================== 00:27:11.909 Submission Queue Entry Size 00:27:11.909 Max: 64 00:27:11.909 Min: 64 00:27:11.909 Completion Queue Entry Size 00:27:11.909 Max: 16 00:27:11.909 Min: 16 00:27:11.909 Number of Namespaces: 1024 00:27:11.909 Compare Command: Not Supported 00:27:11.909 Write Uncorrectable Command: Not Supported 00:27:11.909 Dataset Management Command: Supported 00:27:11.909 Write Zeroes Command: Supported 00:27:11.909 Set Features Save Field: Not Supported 00:27:11.909 Reservations: Not Supported 00:27:11.909 Timestamp: Not Supported 00:27:11.909 Copy: Not Supported 00:27:11.909 Volatile Write Cache: Present 00:27:11.909 Atomic Write Unit (Normal): 1 00:27:11.909 Atomic Write Unit (PFail): 1 00:27:11.909 Atomic Compare & Write Unit: 1 00:27:11.909 Fused Compare & Write: Not Supported 00:27:11.909 Scatter-Gather List 00:27:11.909 SGL Command Set: Supported 00:27:11.909 SGL Keyed: Not Supported 00:27:11.909 SGL Bit Bucket Descriptor: Not Supported 00:27:11.909 SGL Metadata Pointer: Not Supported 00:27:11.909 Oversized SGL: Not Supported 00:27:11.909 SGL Metadata Address: Not Supported 00:27:11.909 SGL Offset: Supported 00:27:11.909 Transport SGL Data Block: Not Supported 00:27:11.909 Replay Protected Memory Block: Not Supported 00:27:11.909 00:27:11.909 Firmware Slot Information 00:27:11.909 ========================= 00:27:11.909 Active slot: 0 00:27:11.909 00:27:11.909 Asymmetric Namespace Access 00:27:11.909 =========================== 00:27:11.910 Change Count : 0 00:27:11.910 Number of ANA Group Descriptors : 1 00:27:11.910 ANA Group Descriptor : 0 00:27:11.910 ANA Group ID : 1 00:27:11.910 Number of NSID Values : 1 00:27:11.910 Change Count : 0 00:27:11.910 ANA State : 1 00:27:11.910 Namespace Identifier : 1 00:27:11.910 00:27:11.910 Commands Supported and Effects 00:27:11.910 ============================== 00:27:11.910 Admin Commands 00:27:11.910 -------------- 00:27:11.910 Get Log Page (02h): Supported 00:27:11.910 Identify (06h): Supported 00:27:11.910 Abort (08h): Supported 00:27:11.910 Set Features (09h): Supported 00:27:11.910 Get Features (0Ah): Supported 00:27:11.910 Asynchronous Event Request (0Ch): Supported 00:27:11.910 Keep Alive (18h): Supported 00:27:11.910 I/O Commands 00:27:11.910 ------------ 00:27:11.910 Flush (00h): Supported 00:27:11.910 Write (01h): Supported LBA-Change 00:27:11.910 Read (02h): Supported 00:27:11.910 Write Zeroes (08h): Supported LBA-Change 00:27:11.910 Dataset Management (09h): Supported 00:27:11.910 00:27:11.910 Error Log 00:27:11.910 ========= 
00:27:11.910 Entry: 0 00:27:11.910 Error Count: 0x3 00:27:11.910 Submission Queue Id: 0x0 00:27:11.910 Command Id: 0x5 00:27:11.910 Phase Bit: 0 00:27:11.910 Status Code: 0x2 00:27:11.910 Status Code Type: 0x0 00:27:11.910 Do Not Retry: 1 00:27:11.910 Error Location: 0x28 00:27:11.910 LBA: 0x0 00:27:11.910 Namespace: 0x0 00:27:11.910 Vendor Log Page: 0x0 00:27:11.910 ----------- 00:27:11.910 Entry: 1 00:27:11.910 Error Count: 0x2 00:27:11.910 Submission Queue Id: 0x0 00:27:11.910 Command Id: 0x5 00:27:11.910 Phase Bit: 0 00:27:11.910 Status Code: 0x2 00:27:11.910 Status Code Type: 0x0 00:27:11.910 Do Not Retry: 1 00:27:11.910 Error Location: 0x28 00:27:11.910 LBA: 0x0 00:27:11.910 Namespace: 0x0 00:27:11.910 Vendor Log Page: 0x0 00:27:11.910 ----------- 00:27:11.910 Entry: 2 00:27:11.910 Error Count: 0x1 00:27:11.910 Submission Queue Id: 0x0 00:27:11.910 Command Id: 0x4 00:27:11.910 Phase Bit: 0 00:27:11.910 Status Code: 0x2 00:27:11.910 Status Code Type: 0x0 00:27:11.910 Do Not Retry: 1 00:27:11.910 Error Location: 0x28 00:27:11.910 LBA: 0x0 00:27:11.910 Namespace: 0x0 00:27:11.910 Vendor Log Page: 0x0 00:27:11.910 00:27:11.910 Number of Queues 00:27:11.910 ================ 00:27:11.910 Number of I/O Submission Queues: 128 00:27:11.910 Number of I/O Completion Queues: 128 00:27:11.910 00:27:11.910 ZNS Specific Controller Data 00:27:11.910 ============================ 00:27:11.910 Zone Append Size Limit: 0 00:27:11.910 00:27:11.910 00:27:11.910 Active Namespaces 00:27:11.910 ================= 00:27:11.910 get_feature(0x05) failed 00:27:11.910 Namespace ID:1 00:27:11.910 Command Set Identifier: NVM (00h) 00:27:11.910 Deallocate: Supported 00:27:11.910 Deallocated/Unwritten Error: Not Supported 00:27:11.910 Deallocated Read Value: Unknown 00:27:11.910 Deallocate in Write Zeroes: Not Supported 00:27:11.910 Deallocated Guard Field: 0xFFFF 00:27:11.910 Flush: Supported 00:27:11.910 Reservation: Not Supported 00:27:11.910 Namespace Sharing Capabilities: Multiple Controllers 00:27:11.910 Size (in LBAs): 3750748848 (1788GiB) 00:27:11.910 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:11.910 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:11.910 UUID: 73c513ca-c020-49e6-bc93-64f8a01db9d0 00:27:11.910 Thin Provisioning: Not Supported 00:27:11.910 Per-NS Atomic Units: Yes 00:27:11.910 Atomic Write Unit (Normal): 8 00:27:11.910 Atomic Write Unit (PFail): 8 00:27:11.910 Preferred Write Granularity: 8 00:27:11.910 Atomic Compare & Write Unit: 8 00:27:11.910 Atomic Boundary Size (Normal): 0 00:27:11.910 Atomic Boundary Size (PFail): 0 00:27:11.910 Atomic Boundary Offset: 0 00:27:11.910 NGUID/EUI64 Never Reused: No 00:27:11.910 ANA group ID: 1 00:27:11.910 Namespace Write Protected: No 00:27:11.910 Number of LBA Formats: 1 00:27:11.910 Current LBA Format: LBA Format #00 00:27:11.910 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:11.910 00:27:11.910 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:11.910 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:11.910 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:11.910 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:11.910 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:11.910 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:11.910 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:11.910 rmmod nvme_tcp 00:27:11.910 rmmod nvme_fabrics 00:27:11.910 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:11.910 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:11.910 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:11.910 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:11.910 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:11.910 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:11.910 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:11.910 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:11.910 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:11.910 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.910 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:11.910 09:41:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.458 09:41:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:14.458 09:41:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:14.458 09:41:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:14.458 09:41:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:14.458 09:41:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:14.458 09:41:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:14.458 09:41:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:14.458 09:41:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:14.458 09:41:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:14.458 09:41:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:14.458 09:41:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:17.837 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:17.837 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:17.837 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:17.837 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:17.837 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:17.837 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:17.837 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:17.837 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:17.837 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:17.837 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:17.837 0000:00:01.4 (8086 0b00): ioatdma -> 
vfio-pci 00:27:17.837 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:17.837 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:17.837 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:17.837 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:17.837 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:17.837 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:17.837 00:27:17.837 real 0m18.388s 00:27:17.837 user 0m5.019s 00:27:17.837 sys 0m10.315s 00:27:17.837 09:41:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:17.837 09:41:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:17.837 ************************************ 00:27:17.837 END TEST nvmf_identify_kernel_target 00:27:17.837 ************************************ 00:27:17.837 09:41:49 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:17.837 09:41:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:17.837 09:41:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:17.837 09:41:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:17.837 ************************************ 00:27:17.837 START TEST nvmf_auth_host 00:27:17.837 ************************************ 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:17.837 * Looking for test storage... 00:27:17.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:17.837 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:17.838 09:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:24.443 09:41:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.443 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:24.444 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:24.444 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # 
[[ tcp == rdma ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:24.444 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:24.444 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.444 
09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:24.444 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:24.706 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:24.706 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.706 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:24.706 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.706 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.706 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.706 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:24.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.815 ms 00:27:24.967 00:27:24.967 --- 10.0.0.2 ping statistics --- 00:27:24.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.967 rtt min/avg/max/mdev = 0.815/0.815/0.815/0.000 ms 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:24.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:24.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:27:24.967 00:27:24.967 --- 10.0.0.1 ping statistics --- 00:27:24.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.967 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1297593 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1297593 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 1297593 ']' 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
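
The nvmf_tcp_init sequence traced above reduces to a short ip/iptables recipe: the target-side port moves into its own network namespace, each end gets an address, and TCP port 4420 is opened before a ping sanity check. A minimal sketch using this run's interface names (cvl_0_0/cvl_0_1) and addresses:

# Target port lives in a namespace; initiator port stays in the root namespace.
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP traffic in on the initiator interface, then verify reachability.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With this in place the nvmf_tgt launch below runs under ip netns exec cvl_0_0_ns_spdk, keeping the target's listener inside the namespace.
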
00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.967 09:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a7e41b4cd8e02133ce567c5bb91e1638 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.76J 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a7e41b4cd8e02133ce567c5bb91e1638 0 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a7e41b4cd8e02133ce567c5bb91e1638 0 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a7e41b4cd8e02133ce567c5bb91e1638 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.76J 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.76J 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.76J 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8db61204856e1f05e74948180e0f11dce98f28100e1b7c58772d1f17a44ee458 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.se0 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8db61204856e1f05e74948180e0f11dce98f28100e1b7c58772d1f17a44ee458 3 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8db61204856e1f05e74948180e0f11dce98f28100e1b7c58772d1f17a44ee458 3 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8db61204856e1f05e74948180e0f11dce98f28100e1b7c58772d1f17a44ee458 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.se0 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.se0 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.se0 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=85f98ac24528faedac671dad95b36568b87b5289eb7d144a 00:27:25.913 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:25.914 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.O2U 00:27:25.914 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 85f98ac24528faedac671dad95b36568b87b5289eb7d144a 0 00:27:25.914 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 85f98ac24528faedac671dad95b36568b87b5289eb7d144a 0 00:27:25.914 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:25.914 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:25.914 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=85f98ac24528faedac671dad95b36568b87b5289eb7d144a 00:27:25.914 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:25.914 
09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.O2U 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.O2U 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.O2U 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=08fb5681cf1aeb76e2c89de5ec7a82517e7ce21d03dd3b2d 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Nvi 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 08fb5681cf1aeb76e2c89de5ec7a82517e7ce21d03dd3b2d 2 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 08fb5681cf1aeb76e2c89de5ec7a82517e7ce21d03dd3b2d 2 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=08fb5681cf1aeb76e2c89de5ec7a82517e7ce21d03dd3b2d 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Nvi 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Nvi 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Nvi 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b60ded0b500481464ec1c803b446e80d 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BdS 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@729 -- # format_dhchap_key b60ded0b500481464ec1c803b446e80d 1 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b60ded0b500481464ec1c803b446e80d 1 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b60ded0b500481464ec1c803b446e80d 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:26.175 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BdS 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BdS 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.BdS 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ab752239d09149f428245c2d96ebe0ab 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.srH 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ab752239d09149f428245c2d96ebe0ab 1 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ab752239d09149f428245c2d96ebe0ab 1 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ab752239d09149f428245c2d96ebe0ab 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.srH 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.srH 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.srH 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:26.176 09:41:57 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4d83a135045c7b8b14497461dc38b56b29fda89d10f2adc3 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4Uy 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4d83a135045c7b8b14497461dc38b56b29fda89d10f2adc3 2 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4d83a135045c7b8b14497461dc38b56b29fda89d10f2adc3 2 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4d83a135045c7b8b14497461dc38b56b29fda89d10f2adc3 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:26.176 09:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4Uy 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4Uy 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.4Uy 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9d8d70e078c8ba5001b3bf1774f3edda 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gCc 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9d8d70e078c8ba5001b3bf1774f3edda 0 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9d8d70e078c8ba5001b3bf1774f3edda 0 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9d8d70e078c8ba5001b3bf1774f3edda 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gCc 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gCc 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- 
# ckeys[3]=/tmp/spdk.key-null.gCc 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4580eb2c49f6d844fa289c81bce597cd6061bf8c0a24edd988d93c28c3924c96 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.20w 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4580eb2c49f6d844fa289c81bce597cd6061bf8c0a24edd988d93c28c3924c96 3 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4580eb2c49f6d844fa289c81bce597cd6061bf8c0a24edd988d93c28c3924c96 3 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4580eb2c49f6d844fa289c81bce597cd6061bf8c0a24edd988d93c28c3924c96 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.20w 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.20w 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.20w 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1297593 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 1297593 ']' 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
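
Each gen_dhchap_key call above pairs xxd against /dev/urandom with a DHHC-1 wrapper: digest ids 0 through 3 map to null/sha256/sha384/sha512 (the digests table in the trace), and the secret is base64-encoded with a trailing CRC32 per the NVMe-oF shared-secret representation. A standalone sketch of the same idea (not SPDK's exact helper; the CRC/base64 step is an assumption based on that representation):

key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars = a 32-byte ASCII secret
python3 - "$key" << 'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()
# NVMe-oF secrets carry a little-endian CRC32 of the secret inside the base64 payload.
crc = zlib.crc32(secret).to_bytes(4, "little")
print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")  # 00 = null digest
EOF

The result lands in a mktemp file and is chmod'd 0600, matching the trace; the companion "ckey" of each pair is the controller-side secret later used for bidirectional authentication.
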
00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:26.438 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.76J 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.se0 ]] 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.se0 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.O2U 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Nvi ]] 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Nvi 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.BdS 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.srH ]] 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.srH 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.700 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
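
Stripped of the harness plumbing, the registration loop traced here (and continuing below) is a sequence of paired keyring_file_add_key RPCs, handing the target one named key per generated file; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py. Using this run's key files:

scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.76J    # host key
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.se0  # controller key
scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.O2U
scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Nvi
# ...and likewise for key2/ckey2 through key4
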
00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.4Uy 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.gCc ]] 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.gCc 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.20w 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
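
configure_kernel_target (continuing below) then builds the kernel-side subsystem entirely through nvmet configfs. Condensed, with this run's NQN, namespace device, and port values, and standard nvmet attribute names assumed for each bare echo in the trace:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo 1 > "$subsys/attr_allow_any_host"            # the auth test flips this to 0 later
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"      # expose the subsystem on the port

The auth test additionally creates /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0, links it under the subsystem's allowed_hosts, and writes 0 to attr_allow_any_host, so only the authenticated host NQN may connect.
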
00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:26.701 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:26.962 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:26.963 09:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:30.267 Waiting for block devices as requested 00:27:30.267 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:30.267 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:30.267 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:30.267 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:30.527 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:30.527 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:30.527 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:30.788 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:30.788 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:31.048 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:31.048 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:31.048 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:31.049 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:31.309 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:31.309 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:31.309 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:31.309 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:32.252 No valid GPT data, bailing 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:32.252 00:27:32.252 Discovery Log Number of Records 2, Generation counter 2 00:27:32.252 =====Discovery Log Entry 0====== 00:27:32.252 trtype: tcp 00:27:32.252 adrfam: ipv4 00:27:32.252 subtype: current discovery subsystem 00:27:32.252 treq: not specified, sq flow control disable supported 00:27:32.252 portid: 1 00:27:32.252 trsvcid: 4420 00:27:32.252 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:32.252 traddr: 10.0.0.1 00:27:32.252 eflags: none 00:27:32.252 sectype: none 00:27:32.252 =====Discovery Log Entry 1====== 00:27:32.252 trtype: tcp 00:27:32.252 adrfam: ipv4 00:27:32.252 subtype: nvme subsystem 00:27:32.252 treq: not specified, sq flow control disable supported 00:27:32.252 portid: 1 00:27:32.252 trsvcid: 4420 00:27:32.252 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:32.252 traddr: 10.0.0.1 00:27:32.252 eflags: none 00:27:32.252 sectype: none 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 
]] 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.252 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.253 nvme0n1 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.253 09:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.253 09:42:04 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: ]] 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.253 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.514 
09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.514 nvme0n1 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.514 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.515 09:42:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: ]] 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.515 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.776 nvme0n1 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
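
For readers following the target side: the echo/ln -s trace above is the test writing the kernel nvmet configfs tree, but bash xtrace does not print redirections, so the destination files never appear in the log. A minimal sketch of what one nvmet_auth_set_key pass amounts to, assuming the dhchap_* attribute names of the upstream nvmet configfs interface (the paths are taken from the trace; the key strings are elided here but appear verbatim above):

# Target-side DH-HMAC-CHAP setup via configfs (run as root, nvmet/nvmet-tcp loaded).
# Attribute names are an assumption based on the upstream nvmet interface,
# since xtrace hides the redirect targets.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
mkdir "$host"                               # register the host NQN
echo 0 > "$subsys/attr_allow_any_host"      # restrict connects to allowed_hosts
ln -s "$host" "$subsys/allowed_hosts/"      # whitelist the test host
echo 'hmac(sha256)' > "$host/dhchap_hash"   # digest for this iteration
echo ffdhe2048 > "$host/dhchap_dhgroup"     # DH group for this iteration
echo 'DHHC-1:00:<key>:' > "$host/dhchap_key"        # host secret
echo 'DHHC-1:02:<ckey>:' > "$host/dhchap_ctrl_key"  # controller secret (bidirectional)
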
00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: ]] 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:32.776 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.777 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.038 nvme0n1 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: ]] 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:33.038 09:42:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.038 nvme0n1 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.038 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.039 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.039 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.039 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.300 09:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:33.301 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.301 09:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.301 nvme0n1 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: ]] 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.301 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.562 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.562 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.562 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.562 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.562 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.562 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.562 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.562 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.562 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.562 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.563 nvme0n1 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: ]] 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.563 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.824 nvme0n1 00:27:33.824 
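
On the initiator side, every iteration is the same four-RPC cycle seen in the trace: set the allowed digests and DH groups, attach with a key, verify a controller appeared, detach. A sketch of one sha256/ffdhe3072/keyid=1 pass using scripts/rpc.py directly (rpc_cmd in the autotest harness is a thin wrapper around it; key1/ckey1 are assumed to be key names registered with the bdev_nvme layer earlier in the run, outside this excerpt):

# One connect_authenticate iteration, spelled out. Flags mirror the rpc_cmd
# lines in the trace above.
scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0              # tear down for the next pass
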
09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: ]] 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:33.824 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.825 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.086 nvme0n1 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
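
A note on the recurring [[ nvme0 == \n\v\m\e\0 ]] lines: they are not corruption. When bash traces a [[ ... == ... ]] test whose right-hand side was quoted in the source, xtrace prints the pattern with every character backslash-escaped to mark it as a literal (non-glob) match. The check therefore reduces to the following, with rpc_cmd and jq exactly as in the trace:

# Verify that DH-HMAC-CHAP authentication produced a controller, then detach.
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]] || exit 1   # xtrace renders this RHS as \n\v\m\e\0
rpc_cmd bdev_nvme_detach_controller nvme0
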
00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: ]] 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.086 09:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.347 nvme0n1 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.347 
09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.347 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.348 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:34.348 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.348 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.348 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.348 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.348 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.348 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.348 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.348 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.348 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.348 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.348 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.348 09:42:06 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.348 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.348 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.348 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.348 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.348 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.608 nvme0n1 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: ]] 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:34.608 09:42:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.608 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.609 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.609 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.609 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.869 nvme0n1 00:27:34.869 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.869 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.869 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.869 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.869 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.869 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.869 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: ]] 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.130 09:42:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.130 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.391 nvme0n1 00:27:35.391 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.391 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.391 09:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.391 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.391 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.391 09:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.391 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.391 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.391 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.391 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.391 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.391 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.391 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:35.391 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.391 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.391 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:35.391 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:35.391 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:35.391 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:35.391 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.391 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:35.391 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:35.391 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: ]] 00:27:35.391 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:35.391 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.392 09:42:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.392 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.687 nvme0n1 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
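Each pass of the trace above is one iteration of auth.sh's test matrix (the for-loops at host/auth.sh@100-102: digests, then DH groups, then key IDs). Per iteration the script programs the kernel target's expected key via nvmet_auth_set_key, pins the SPDK initiator to the same digest and DH group, attaches with the matching keypair, confirms the controller appears, and detaches. The bash sketch below condenses one such iteration as reconstructed from the xtrace output; the configfs paths are an assumption (the four echo statements at host/auth.sh@48-51 show only the values, not their redirection targets), rpc_cmd is the autotest wrapper around SPDK's rpc.py, and key1/ckey1 are keyring names registered earlier in the run, not raw secrets.

# One sha256/ffdhe4096 iteration (keyid=1), reconstructed from the trace.
# ASSUMPTION: nvmet_auth_set_key writes into the kernel target's configfs
# host entry; the exact paths below are inferred, not shown in the log.
key='DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==:'
ckey='DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==:'
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

echo 'hmac(sha256)' > "$host/dhchap_hash"       # digest under test
echo ffdhe4096      > "$host/dhchap_dhgroup"    # DH group under test
echo "$key"         > "$host/dhchap_key"        # what the target expects from the host
echo "$ckey"        > "$host/dhchap_ctrlr_key"  # enables bidirectional authentication

# Initiator side: pin digest/group, connect with the matching keypair,
# verify the controller exists, then tear down for the next iteration.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1   # key names, not DHHC-1 strings
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0

For orientation: in the DHHC-1:<id>:<base64>: strings echoed at host/auth.sh@50-51, the two-digit id names the secret's transformation hash (00 = unhashed; 01/02/03 = sha256/384/512 per the DH-HMAC-CHAP secret representation) and the base64 payload ends in a CRC32 of the secret.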
00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: ]] 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.687 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.688 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.688 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:35.688 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.688 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.958 nvme0n1 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.958 09:42:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.958 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.219 nvme0n1 00:27:36.219 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.219 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.219 09:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.219 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.220 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.220 09:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.220 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.220 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.220 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.220 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:36.481 09:42:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: ]] 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.481 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.742 nvme0n1 00:27:36.742 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.742 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.742 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.742 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.742 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.742 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.742 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.742 
09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.742 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.742 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: ]] 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.003 09:42:08 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.003 09:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.264 nvme0n1 00:27:37.264 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.264 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.264 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.264 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.264 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.264 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.264 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.264 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.264 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.264 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.525 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.525 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.525 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:37.525 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.525 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.525 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: ]] 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.526 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.787 nvme0n1 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.787 
09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: ]] 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.787 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.048 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.048 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.048 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.048 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.048 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.048 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.048 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.048 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.048 09:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.048 09:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:38.048 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.048 09:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.309 nvme0n1 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.309 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.569 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.569 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.569 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.569 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.569 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.569 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.569 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.569 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:38.569 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.569 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.850 nvme0n1 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: ]] 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.850 09:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.791 nvme0n1 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.791 09:42:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.791 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: ]] 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.792 09:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.735 nvme0n1 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: ]] 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.735 09:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.306 nvme0n1 00:27:41.306 09:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.306 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.306 09:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.306 09:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.306 09:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.306 09:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.306 
09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: ]] 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
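Before every attach, the trace expands get_main_ns_ip from nvmf/common.sh to resolve the address the initiator dials: an associative array maps each transport to the *name* of the environment variable holding its address, and bash indirection dereferences it (NVMF_INITIATOR_IP -> 10.0.0.1 for tcp in this run). A sketch consistent with the expanded statements in the surrounding trace; the variable names are verbatim, while the early-return wiring is inferred from the [[ -z ... ]] guards:

# Reconstruction of get_main_ns_ip as traced at nvmf/common.sh@741-755.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()

    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # values are variable *names*
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    if [[ -z $TEST_TRANSPORT ]] || [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
        return 1   # unknown transport: nothing to dial
    fi
    ip=${ip_candidates[$TEST_TRANSPORT]}

    [[ -z ${!ip} ]] && return 1   # ${!ip} dereferences, e.g. $NVMF_INITIATOR_IP
    echo "${!ip}"                 # 10.0.0.1 here
}

Storing variable names rather than values lets the same helper serve both rdma and tcp runs and read the environment at call time instead of at definition time.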
00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.306 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.248 nvme0n1 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:42.248 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:42.249 
09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.249 09:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.819 nvme0n1 00:27:42.819 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.819 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.819 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.819 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.819 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.819 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.080 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.080 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.080 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.080 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.080 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.080 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:43.080 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.080 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.080 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:27:43.080 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.080 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.080 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:43.080 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:43.080 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:43.080 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:43.080 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.080 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:43.080 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:43.080 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: ]] 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.081 nvme0n1 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: ]] 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
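Each connect_authenticate round traced here follows the same four RPCs: constrain the allowed digests and DH groups, attach with the key (and, when a controller key exists, the ckey), confirm the controller came up, then detach. A condensed sketch of that round driven through SPDK's scripts/rpc.py rather than the suite's rpc_cmd wrapper; it assumes key0/ckey0 were already registered with the target application's keyring, a step outside this excerpt:

  # One round as traced above (sha384 / ffdhe2048 / keyid 0), sketched
  # with scripts/rpc.py. RPC names and flags are the ones in the trace.
  rpc=scripts/rpc.py
  $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 on success
  $rpc bdev_nvme_detach_controller nvme0              # tear down before the next keyid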
00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.081 09:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.341 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.341 09:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.341 nvme0n1 00:27:43.341 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.341 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.341 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.341 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: ]] 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.342 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.603 nvme0n1 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: ]] 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.603 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.863 nvme0n1 00:27:43.863 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.863 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.863 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.863 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.863 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.863 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.863 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.863 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.863 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.863 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.863 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.863 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.863 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:43.863 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.864 nvme0n1 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.864 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.124 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: ]] 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
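The loop markers visible in the trace (auth.sh@100, @101, @102) show the overall shape of the sweep: every digest is paired with every DH group and every configured key index. A sketch of that structure; the array contents are inferred from the values that appear in this log (sha256/sha384 so far, ffdhe2048 through ffdhe8192, key indices 0-4), not from the script source:

  # Shape of the sweep being traced (auth.sh@100-104).
  for digest in "${digests[@]}"; do           # e.g. sha256 sha384 ...
    for dhgroup in "${dhgroups[@]}"; do       # e.g. ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192
      for keyid in "${!keys[@]}"; do          # 0..4 in this run
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (auth.sh@103)
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side (auth.sh@104)
      done
    done
  done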
00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.125 nvme0n1 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.125 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: ]] 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
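The get_main_ns_ip fragments repeated throughout (nvmf/common.sh@741-755) reduce to a transport-keyed indirect lookup. A condensed sketch of that logic; the candidate map, the checks, and the echoed result match the trace, while the transport variable's name and the function body's arrangement are assumptions:

  # Pick the address to attach to, keyed by transport (tcp here).
  get_main_ns_ip_sketch() {
    local ip
    local -A ip_candidates=(
      [rdma]=NVMF_FIRST_TARGET_IP
      [tcp]=NVMF_INITIATOR_IP
    )
    [[ -z $TEST_TRANSPORT ]] && return 1       # trace: [[ -z tcp ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}       # tcp -> NVMF_INITIATOR_IP
    [[ -z $ip ]] && return 1
    ip=${!ip}                                  # dereference: 10.0.0.1 in this run
    [[ -z $ip ]] && return 1                   # trace: [[ -z 10.0.0.1 ]]
    echo "$ip"
  }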
00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.385 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.386 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.386 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.386 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.386 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.386 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.386 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.386 09:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.386 09:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.386 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.386 09:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.386 nvme0n1 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: ]] 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.386 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.647 nvme0n1 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:44.647 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: ]] 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.648 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.908 nvme0n1 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.908 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.168 nvme0n1 00:27:45.168 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.169 09:42:16 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: ]] 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.169 09:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.429 nvme0n1 00:27:45.429 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.429 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.429 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.429 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.429 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.429 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: ]] 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.690 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.951 nvme0n1 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.951 09:42:17 
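Each pass begins on the target side: nvmet_auth_set_key (host/auth.sh@42-51 above) pushes the digest, DH group, and DHHC-1 secrets for the current keyid into the kernel nvmet target before the host tries to connect. A minimal sketch of the idea, with the key material passed in directly for self-containment; the configfs attribute names assume the stock Linux nvmet layout, so treat them as an assumption rather than a quote from the script:

```bash
# Sketch only: how DH-HMAC-CHAP parameters land on a Linux kernel nvmet
# target. Assumes the standard nvmet configfs attributes; the host NQN
# matches the one used throughout this run.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 key=$3 ckey=$4    # ckey may be empty
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. hmac(sha384)
    echo "$dhgroup"      > "$host/dhchap_dhgroup"  # e.g. ffdhe4096
    echo "$key"          > "$host/dhchap_key"      # DHHC-1:..: host secret
    if [[ -n $ckey ]]; then                        # bidirectional auth only
        echo "$ckey" > "$host/dhchap_ctrl_key"
    fi
}
```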
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: ]] 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.951 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.213 nvme0n1 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: ]] 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:46.213 09:42:17 
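The nvmf/common.sh@741-755 lines that recur before every attach are get_main_ns_ip: it keeps an associative array mapping each transport to the name of an environment variable, then dereferences that name with bash indirect expansion, which is why the trace shows the literal NVMF_INITIATOR_IP before the resolved 10.0.0.1. A condensed sketch; the TEST_TRANSPORT variable name is an assumption standing in for whatever expanded to tcp here:

```bash
# Sketch of the IP-selection idiom traced at nvmf/common.sh@741-755:
# pick the env-var *name* by transport, then dereference it indirectly.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    # Bail out if the transport is unset or has no mapping.
    [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
    [[ -n ${!ip} ]] || return 1            # indirect expansion of that name
    echo "${!ip}"                          # e.g. 10.0.0.1
}
```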
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.213 09:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.474 nvme0n1 00:27:46.474 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.474 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.474 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.474 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.475 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.736 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.736 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.736 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.736 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.736 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.736 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.736 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.736 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.736 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.736 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.736 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.736 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.736 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:46.736 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:27:46.736 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.997 nvme0n1 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: ]] 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.997 09:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.258 nvme0n1 00:27:47.258 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: ]] 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.519 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.520 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.520 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.520 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.520 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.520 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.520 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.520 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.520 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.520 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.781 nvme0n1 00:27:47.781 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.781 09:42:19 
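The host-side half of each pass, connect_authenticate (host/auth.sh@55-61), first pins the SPDK host to exactly one digest/DH-group pair and then attaches with the keyring names keyN/ckeyN. A sketch assembled from the RPCs visible in the trace; rpc_cmd wraps SPDK's scripts/rpc.py, and the keys/ckeys arrays and keyring entries are assumed to have been set up earlier in the script, outside this excerpt:

```bash
# Sketch of the host-side half of one iteration, as traced above.
# keys/ckeys are the arrays the surrounding loops iterate over.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Expands to nothing when ckeys[keyid] is empty (unidirectional auth).
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # Constrain negotiation to exactly the combination under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
        --dhchap-dhgroups "$dhgroup"

    # Connect with DH-HMAC-CHAP; key names refer to keyring entries.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
}
```

Invoked as connect_authenticate sha384 ffdhe6144 1 and so on, matching the host/auth.sh@104 calls in the trace.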
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.781 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.781 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.781 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: ]] 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.042 09:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.304 nvme0n1 00:27:48.304 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.304 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.304 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.304 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.304 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.304 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: ]] 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.565 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.826 nvme0n1 00:27:48.826 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.827 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.827 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.827 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.827 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.827 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.087 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:49.088 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.088 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.088 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.088 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.088 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.088 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.088 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.088 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.088 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.088 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.088 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.088 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
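keyid 4 is the deliberate outlier in every dhgroup pass: its controller key is empty (the [[ -z '' ]] at host/auth.sh@51 above), so the ${ckeys[keyid]:+...} expansion at @58 yields an empty array, no --dhchap-ctrlr-key flag reaches bdev_nvme_attach_controller, and the connection exercises unidirectional DH-HMAC-CHAP: the host authenticates to the controller but never challenges it. The expansion in isolation:

```bash
# Sketch: why keyid 4 connects with --dhchap-key only. ":+" substitutes
# the flag pair only when ckeys[4] is set and non-empty.
ckeys[4]=''
ckey=(${ckeys[4]:+--dhchap-ctrlr-key "ckey4"})
echo "${#ckey[@]}"   # 0 -> flag omitted, unidirectional authentication
```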
00:27:49.088 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.088 09:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.088 09:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:49.088 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.088 09:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.393 nvme0n1 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: ]] 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
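All secrets in this run use the NVMe-oF DHHC-1:&lt;t&gt;:&lt;base64&gt;: representation that nvme gen-dhchap-key emits: &lt;t&gt; selects the transformation hash applied to the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), and the payload is, by convention, the raw key with a 4-byte CRC-32 appended before base64 encoding. That makes the keys easy to sanity-check; decoding key 0 from this run should yield 36 bytes, a 32-byte secret plus the checksum:

```bash
# Sketch: pull apart one DHHC-1 secret from the trace. Assumes the usual
# layout: base64(key || crc32), with the leading field naming the
# transformation hash (00 = none here, so the key is used as-is).
key='DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0:'
b64=${key#DHHC-1:*:}   # strip the prefix and hash-id field
b64=${b64%:}           # strip the trailing colon
echo -n "$b64" | base64 -d | wc -c   # 36 = 32-byte key + 4-byte CRC-32
```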
00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.393 09:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.681 09:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.681 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.681 09:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.681 09:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.681 09:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.681 09:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.681 09:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.681 09:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.681 09:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.681 09:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.681 09:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.681 09:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.681 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.681 09:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.681 09:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.253 nvme0n1 00:27:50.253 09:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.253 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.253 09:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.253 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.253 09:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.253 09:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.253 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.253 09:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.253 09:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.253 09:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: ]] 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.253 09:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.214 nvme0n1 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: ]] 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: ]] 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.214 09:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.786 nvme0n1 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- #
key=DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: ]] 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.786 09:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.729 nvme0n1 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.729 09:42:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.729 09:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.301 nvme0n1 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: ]] 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.301 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.562 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.562 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.562 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.562 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.562 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.562 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.562 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.562 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.562 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.562 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.562 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.562 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.562 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.562 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.562 nvme0n1 00:27:53.562 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.563 09:42:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: ]] 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.563 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.824 nvme0n1 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: ]] 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.824 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.085 nvme0n1 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.085 09:42:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: ]] 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.085 09:42:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.085 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.346 nvme0n1 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.346 09:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.346 nvme0n1 00:27:54.346 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.346 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.346 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.346 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.346 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.346 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.607 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.607 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.607 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.607 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.607 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.607 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:54.607 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.607 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:54.607 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.607 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.607 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.607 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:54.607 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:54.607 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:54.607 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.607 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:54.607 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:54.607 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: ]] 00:27:54.607 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.608 nvme0n1 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.608 
09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.608 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: ]] 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.869 09:42:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.869 nvme0n1 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.869 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
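The host/auth.sh@48-@51 echoes here are nvmet_auth_set_key reprovisioning the kernel nvmet target for keyid 2: the digest is written as a crypto transform name (hmac(sha512)), the DH group by name (ffdhe3072), then the host secret and, when one exists, the controller secret. The function body is outside this excerpt, but the sequence is consistent with writes into the standard Linux nvmet configfs host attributes; a plausible reconstruction under that assumption (the hostnqn in the path matches the -q value used on every attach):

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)' > "$host/dhchap_hash"    # auth.sh@48
echo ffdhe3072 > "$host/dhchap_dhgroup"      # auth.sh@49
# host secret (auth.sh@50) and controller secret (auth.sh@51, skipped when ckey is empty)
echo "DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8:" > "$host/dhchap_key"
echo "DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa:" > "$host/dhchap_ctrl_key"

The secrets themselves follow the NVMe-oF DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where <t> is 00 for a plain secret or 01/02/03 for one pre-transformed with SHA-256/384/512, and the base64 payload carries the key material followed by a CRC-32 check value, which is why the keys in this log differ in both prefix and length.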
00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: ]] 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.130 nvme0n1 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.130 09:42:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.130 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: ]] 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
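The ip_candidates assignments above (and the [[ -z ... ]] checks that follow) are the whole of nvmf/common.sh's get_main_ns_ip helper: it maps the transport to the name of an environment variable, NVMF_FIRST_TARGET_IP for rdma or NVMF_INITIATOR_IP for tcp, and then dereferences that name, which is why the trace first tests the literal variable name and only afterwards the resolved 10.0.0.1. Reassembled from the @741-@755 trace lines, the helper behaves roughly like this sketch (the $TEST_TRANSPORT variable name and the error returns are assumptions; the trace only shows the concrete values tcp and 10.0.0.1):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # variable *names*, not values
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}    # here: the string NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1             # indirect expansion resolves it to 10.0.0.1
    echo "${!ip}"
}

Every bdev_nvme_attach_controller in this log therefore dials the same initiator-side address, 10.0.0.1:4420, regardless of which digest/dhgroup/keyid combination is being exercised.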
00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.392 09:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.392 nvme0n1 00:27:55.392 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.392 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.392 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.392 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.392 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.392 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.392 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.392 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.392 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.392 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.653 
09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.653 nvme0n1 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.653 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: ]] 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.914 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.175 nvme0n1 00:27:56.175 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.175 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.175 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.175 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.175 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.175 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.175 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.175 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.175 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: ]] 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.176 09:42:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.176 09:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.438 nvme0n1 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
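(Editor's note: a detail worth calling out in the trace: host/auth.sh@58 builds the controller-key argument as ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), so --dhchap-ctrlr-key is emitted only when a ckey exists for that keyid — keyid 4 has an empty ckey (auth.sh@46 shows ckey=) and is therefore attached without bidirectional authentication. A minimal, standalone illustration of the same bash idiom, with hypothetical key values:

    declare -a ckeys=([3]="DHHC-1:00:example" [4]="")   # keyid 4: no controller key
    for keyid in 3 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
    done
    # keyid=3 expands to: --dhchap-ctrlr-key ckey3
    # keyid=4 expands to nothing, because ${empty:+...} yields an empty result

This is why the attach_controller lines for keyids 0-3 in this log carry --dhchap-ctrlr-key ckeyN while the keyid-4 lines end at --dhchap-key key4.)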
00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: ]] 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.438 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.700 nvme0n1 00:27:56.700 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.700 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:27:56.700 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.700 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.700 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.700 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.700 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.700 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.700 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.700 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: ]] 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.961 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.962 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.962 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.962 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.223 nvme0n1 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.223 09:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.485 nvme0n1 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: ]] 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
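(Editor's note: the get_main_ns_ip block traced again and again above (nvmf/common.sh@741-755) selects the address to dial by transport: it keeps a map from transport to the *name* of the variable holding the address, then dereferences it. A paraphrased sketch of that logic, assuming bash indirect expansion and a transport variable here called TEST_TRANSPORT — both are inferences from the trace, not confirmed internals:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP   # used on RDMA runs
            ["tcp"]=NVMF_INITIATOR_IP       # resolves to 10.0.0.1 in this log
        )
        ip=${ip_candidates[$TEST_TRANSPORT]}   # a variable *name*, not a value
        [[ -z $ip || -z ${!ip} ]] && return 1  # bail if transport or address unset
        echo "${!ip}"                          # indirect expansion -> 10.0.0.1
    }

The [[ -z tcp ]], [[ -z NVMF_INITIATOR_IP ]], ip=NVMF_INITIATOR_IP, [[ -z 10.0.0.1 ]], echo 10.0.0.1 sequence in the trace is exactly this two-step name-then-value lookup.)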
00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.485 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.057 nvme0n1 00:27:58.057 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.057 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: ]] 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
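(Editor's note: two recurring oddities in the trace deserve a gloss. The bare nvme0n1 tokens between iterations are the return value of the bdev_nvme_attach_controller RPC — the namespace bdev created once the controller authenticates. And the strange-looking check at auth.sh@64, [[ nvme0 == \n\v\m\e\0 ]], is an xtrace artifact: bash escapes every character of a quoted right-hand side of == so it is matched literally rather than as a glob. Written plainly, the check is just:

    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]   # quoted RHS forces a literal match; xtrace prints \n\v\m\e\0

So a passing iteration reads: attach succeeds, nvme0n1 appears, get_controllers reports nvme0, the literal comparison holds, and detach_controller clears the way for the next dhgroup/keyid pair.)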
00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.058 09:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.630 nvme0n1 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: ]] 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.630 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.631 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.631 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.631 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.631 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.631 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.631 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.631 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.631 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.631 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.631 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.631 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.631 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.631 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.203 nvme0n1 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: ]] 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.203 09:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.464 nvme0n1 00:27:59.464 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.464 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.464 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.464 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.464 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.464 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.725 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.986 nvme0n1 00:27:59.986 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.986 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.986 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.986 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.986 09:42:31 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.246 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.246 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTdlNDFiNGNkOGUwMjEzM2NlNTY3YzViYjkxZTE2Mzgje0H0: 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: ]] 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGRiNjEyMDQ4NTZlMWYwNWU3NDk0ODE4MGUwZjExZGNlOThmMjgxMDBlMWI3YzU4NzcyZDFmMTdhNDRlZTQ1OFOaeqA=: 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.247 09:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.835 nvme0n1 00:28:00.835 09:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.835 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.835 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.835 09:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.835 09:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.835 09:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.835 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.835 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.835 09:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.835 09:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.835 09:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: ]] 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.096 09:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.668 nvme0n1 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.668 09:42:33 
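For reference, each nvmet_auth_set_key pass in the loop above provisions the kernel target ahead of the host attach: the four echoes write the digest, the DH group, and the DHHC-1 secret pair for the host NQN. The function body lives outside this excerpt, so the configfs attribute paths in this sketch are assumptions based on the usual nvmet layout rather than something shown in the log:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host/dhchap_hash"       # digest for DH-HMAC-CHAP
    echo 'ffdhe8192'    > "$host/dhchap_dhgroup"    # FFDHE group
    echo 'DHHC-1:00:ODVmOThh...550lCw==:' > "$host/dhchap_key"       # host secret (key1, truncated here)
    echo 'DHHC-1:02:MDhmYjU2...OhVF1Q==:' > "$host/dhchap_ctrl_key"  # controller secret (ckey1, truncated)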
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:01.668 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjYwZGVkMGI1MDA0ODE0NjRlYzFjODAzYjQ0NmU4MGQmZpi8: 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: ]] 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI3NTIyMzlkMDkxNDlmNDI4MjQ1YzJkOTZlYmUwYWJCzvUa: 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.669 09:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.379 nvme0n1 00:28:02.379 09:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.379 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.379 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.379 09:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.379 09:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.379 09:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.379 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.379 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.379 09:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.379 09:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQ4M2ExMzUwNDVjN2I4YjE0NDk3NDYxZGMzOGI1NmIyOWZkYTg5ZDEwZjJhZGMzrExhfQ==: 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: ]] 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWQ4ZDcwZTA3OGM4YmE1MDAxYjNiZjE3NzRmM2VkZGHTJgBT: 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:02.640 09:42:34 
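Each connect_authenticate pass in this loop amounts to two host-side RPCs: narrow the initiator to a single digest/DH-group pair, then attach with the matching key slot. Written out against rpc.py directly (rpc_cmd is a thin wrapper around it); key2/ckey2 are assumed to have been registered with the app earlier in the run, a step that falls outside this excerpt:

    rpc=scripts/rpc.py
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 once authenticated
    $rpc bdev_nvme_detach_controller nvme0              # reset before the next keyid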
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.640 09:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.212 nvme0n1 00:28:03.212 09:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.212 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.212 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.212 09:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.212 09:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.212 09:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.212 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.212 09:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.212 09:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.212 09:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.212 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.212 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDU4MGViMmM0OWY2ZDg0NGZhMjg5YzgxYmNlNTk3Y2Q2MDYxYmY4YzBhMjRlZGQ5ODhkOTNjMjhjMzkyNGM5NivAAdU=: 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.213 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.474 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.474 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.474 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.474 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.474 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.474 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.474 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.474 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.474 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.474 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.474 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.474 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.474 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:03.474 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:28:03.474 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.046 nvme0n1 00:28:04.046 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.046 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.046 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.046 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.046 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.046 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.046 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.046 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODVmOThhYzI0NTI4ZmFlZGFjNjcxZGFkOTViMzY1NjhiODdiNTI4OWViN2QxNDRh550lCw==: 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: ]] 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDhmYjU2ODFjZjFhZWI3NmUyYzg5ZGU1ZWM3YTgyNTE3ZTdjZTIxZDAzZGQzYjJkOhVF1Q==: 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.047 
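The get_main_ns_ip helper traced around this point decides which environment variable carries the initiator-side address for the active transport, using an associative array plus bash indirect expansion. Reconstructed from the trace; the name of the transport variable is an assumption:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # bail out if the transport is unset or unknown ("tcp" in this run)
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # indirect expansion: the value of $NVMF_INITIATOR_IP
        echo "${!ip}"                 # prints 10.0.0.1 in this run
    }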
09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.047 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.309 request: 00:28:04.309 { 00:28:04.309 "name": "nvme0", 00:28:04.309 "trtype": "tcp", 00:28:04.309 "traddr": "10.0.0.1", 00:28:04.309 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:04.309 "adrfam": "ipv4", 00:28:04.309 "trsvcid": "4420", 00:28:04.309 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:04.309 "method": "bdev_nvme_attach_controller", 00:28:04.309 "req_id": 1 00:28:04.309 } 00:28:04.309 Got JSON-RPC error response 00:28:04.309 response: 00:28:04.309 { 00:28:04.309 "code": -5, 00:28:04.309 "message": "Input/output error" 00:28:04.309 } 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:04.309 
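The expected-failure case just traced leans on the NOT wrapper from autotest_common.sh: run the command, capture its status into es, and succeed only when the command failed. A minimal equivalent is below; the real version, as the es handling in the trace shows, also validates that its argument is executable and special-cases statuses above 128 (signal deaths):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # invert: an expected failure becomes a pass
    }

    # usage matching the trace: attaching without any DH-CHAP key must fail
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
        -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0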
09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.309 request: 00:28:04.309 { 00:28:04.309 "name": "nvme0", 00:28:04.309 "trtype": "tcp", 00:28:04.309 "traddr": "10.0.0.1", 00:28:04.309 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:04.309 "adrfam": "ipv4", 00:28:04.309 "trsvcid": "4420", 00:28:04.309 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:04.309 "dhchap_key": "key2", 00:28:04.309 "method": "bdev_nvme_attach_controller", 00:28:04.309 "req_id": 1 00:28:04.309 } 00:28:04.309 Got JSON-RPC error response 00:28:04.309 response: 00:28:04.309 { 00:28:04.309 "code": -5, 00:28:04.309 "message": "Input/output error" 00:28:04.309 } 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:04.309 
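Spelled out, the second negative case above is a lone attach that offers only key2 after the target was re-keyed for a different slot; authentication fails and the RPC surfaces it as JSON-RPC error -5 (Input/output error), which the NOT wrapper converts into a pass:

    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
    echo $?   # non-zero here; the test asserts exactly that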
09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.309 09:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.309 request: 00:28:04.309 { 00:28:04.309 "name": "nvme0", 00:28:04.309 "trtype": "tcp", 00:28:04.309 "traddr": "10.0.0.1", 00:28:04.309 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:04.309 "adrfam": "ipv4", 00:28:04.309 "trsvcid": "4420", 00:28:04.309 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:04.309 "dhchap_key": "key1", 00:28:04.309 "dhchap_ctrlr_key": "ckey2", 00:28:04.309 "method": "bdev_nvme_attach_controller", 00:28:04.309 "req_id": 1 
00:28:04.309 } 00:28:04.309 Got JSON-RPC error response 00:28:04.309 response: 00:28:04.309 { 00:28:04.309 "code": -5, 00:28:04.309 "message": "Input/output error" 00:28:04.309 } 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:04.309 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:04.309 rmmod nvme_tcp 00:28:04.309 rmmod nvme_fabrics 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1297593 ']' 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1297593 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 1297593 ']' 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 1297593 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1297593 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1297593' 00:28:04.570 killing process with pid 1297593 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 1297593 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 1297593 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:04.570 09:42:36 
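The teardown above is deliberately tolerant of a busy module: the unload runs under set +e inside a retry loop, since nvme-tcp can stay pinned briefly while queues drain, and nvme-fabrics is only removed once nothing depends on it. A condensed sketch; only one iteration appears in the trace because the first unload succeeded, so the break/sleep details here are assumptions:

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # prints the rmmod lines seen above
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e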
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:04.570 09:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.120 09:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:07.120 09:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:07.120 09:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:07.120 09:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:07.120 09:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:07.120 09:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:07.120 09:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:07.120 09:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:07.120 09:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:07.120 09:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:07.120 09:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:07.120 09:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:07.120 09:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:10.425 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:10.425 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:10.425 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:10.425 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:10.425 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:10.425 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:10.425 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:10.425 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:10.425 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:10.425 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:10.425 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:10.425 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:10.425 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:10.425 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:10.425 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:10.425 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:10.425 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:10.425 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.76J /tmp/spdk.key-null.O2U /tmp/spdk.key-sha256.BdS /tmp/spdk.key-sha384.4Uy /tmp/spdk.key-sha512.20w /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:10.425 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:13.732 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:13.732 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:13.732 0000:80:01.4 (8086 0b00): Already using the 
vfio-pci driver 00:28:13.732 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:13.732 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:13.732 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:13.732 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:13.732 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:13.732 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:13.732 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:13.732 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:13.732 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:13.732 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:13.732 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:13.732 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:13.732 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:13.732 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:13.995 00:28:13.995 real 0m56.096s 00:28:13.995 user 0m50.567s 00:28:13.995 sys 0m14.452s 00:28:13.995 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:13.995 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.995 ************************************ 00:28:13.995 END TEST nvmf_auth_host 00:28:13.995 ************************************ 00:28:13.995 09:42:45 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:28:13.995 09:42:45 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:13.995 09:42:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:13.995 09:42:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:13.995 09:42:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:13.995 ************************************ 00:28:13.995 START TEST nvmf_digest 00:28:13.995 ************************************ 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:13.995 * Looking for test storage... 
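The START TEST banner above comes from the run_test wrapper that every suite in this log passes through: it refuses to run without a name plus a command, prints the banners, and times the body (the real/user/sys block before END TEST nvmf_auth_host is that timer). A rough sketch of its shape, not the exact autotest_common.sh body:

    run_test() {
        [ $# -le 1 ] && return 1   # needs a test name and a command
        local name=$1 rc
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }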
00:28:13.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.995 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:13.996 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:13.996 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:13.996 09:42:45 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:13.996 09:42:45 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:13.996 09:42:45 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:13.996 09:42:45 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:13.996 09:42:45 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:13.996 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:13.996 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.996 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:13.996 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:13.996 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:13.996 09:42:45 
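The nvmftestinit that follows has to locate the physical e810 ports (SPDK_TEST_NVMF_NICS=e810, device ID 0x159b) and map each PCI function to its kernel netdev through sysfs, which is what the pci_net_devs globbing in the trace below does. A condensed stand-alone version of that scan; the lspci-based enumeration is an assumption, since the harness keeps its own PCI bus cache:

    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do   # Intel e810 ports
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdev" ] && echo "Found net devices under $pci: ${netdev##*/}"
        done
    done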
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.996 09:42:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:13.996 09:42:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.996 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:13.996 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:13.996 09:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:13.996 09:42:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.142 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:22.143 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:22.143 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:22.143 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:22.143 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:22.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:28:22.143 00:28:22.143 --- 10.0.0.2 ping statistics --- 00:28:22.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.143 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:22.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:28:22.143 00:28:22.143 --- 10.0.0.1 ping statistics --- 00:28:22.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.143 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:22.143 ************************************ 00:28:22.143 START TEST nvmf_digest_clean 00:28:22.143 ************************************ 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # run_digest 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1314312 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1314312 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1314312 ']' 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.143 
09:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:22.143 09:42:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.143 [2024-06-11 09:42:53.010398] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:28:22.143 [2024-06-11 09:42:53.010457] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.143 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.143 [2024-06-11 09:42:53.097664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.143 [2024-06-11 09:42:53.191477] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.144 [2024-06-11 09:42:53.191552] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.144 [2024-06-11 09:42:53.191560] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.144 [2024-06-11 09:42:53.191567] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.144 [2024-06-11 09:42:53.191573] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
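For anyone rebuilding this fixture outside CI, the nvmf_tcp_init sequence traced above boils down to the commands below. This is a minimal sketch, assuming the same kernel-assigned interface names (cvl_0_0, cvl_0_1) and root privileges; the harness additionally flushes stale addresses first, and the target app itself is launched inside the namespace via ip netns exec.

    # Put the target-side port in its own namespace so initiator and target
    # talk over the physical e810 link rather than the loopback path.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # The initiator keeps cvl_0_1 in the default namespace as 10.0.0.1;
    # the namespaced target port becomes 10.0.0.2.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Admit NVMe/TCP on the default port 4420, then verify both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1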
00:28:22.144 [2024-06-11 09:42:53.191605] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.144 09:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:22.144 09:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:28:22.144 09:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:22.144 09:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:22.144 09:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.144 09:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.144 09:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:22.144 09:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:22.144 09:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:22.144 09:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:22.144 09:42:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.405 null0 00:28:22.405 [2024-06-11 09:42:54.036403] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.405 [2024-06-11 09:42:54.060652] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.405 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:22.405 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:22.405 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:22.405 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:22.405 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:22.405 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:22.405 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:22.405 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:22.405 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1314621 00:28:22.405 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1314621 /var/tmp/bperf.sock 00:28:22.405 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1314621 ']' 00:28:22.405 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:22.405 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:22.405 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:22.405 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:22.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:22.405 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:22.405 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.405 [2024-06-11 09:42:54.117144] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:28:22.405 [2024-06-11 09:42:54.117207] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1314621 ] 00:28:22.405 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.405 [2024-06-11 09:42:54.181425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.666 [2024-06-11 09:42:54.255891] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.666 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:22.666 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:28:22.666 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:22.666 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:22.666 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:22.927 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:22.927 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.187 nvme0n1 00:28:23.187 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:23.187 09:42:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:23.187 Running I/O for 2 seconds... 
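Because bdevperf is started with -z --wait-for-rpc, it does nothing until driven over its private RPC socket; each clean-digest case is essentially three calls, shown below with the long workspace prefix abbreviated to $SPDK. The --ddgst flag is the point of the exercise: it enables the NVMe/TCP data digest (a CRC32C over the PDU payload) on the initiator side.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # 1. Finish framework init, which --wait-for-rpc deferred.
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init

    # 2. Attach to the target with data digest enabled.
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 3. Start the timed workload against the resulting nvme0n1 bdev.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests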
00:28:25.732 00:28:25.732 Latency(us) 00:28:25.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.732 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:25.732 nvme0n1 : 2.05 20193.09 78.88 0.00 0.00 6220.70 2894.51 46093.65 00:28:25.732 =================================================================================================================== 00:28:25.732 Total : 20193.09 78.88 0.00 0.00 6220.70 2894.51 46093.65 00:28:25.732 0 00:28:25.732 09:42:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:25.732 09:42:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:25.732 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:25.732 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:25.732 | select(.opcode=="crc32c") 00:28:25.732 | "\(.module_name) \(.executed)"' 00:28:25.732 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:25.732 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:25.732 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:25.732 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:25.732 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:25.732 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1314621 00:28:25.732 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1314621 ']' 00:28:25.732 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1314621 00:28:25.732 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:25.732 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:25.732 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1314621 00:28:25.732 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:25.732 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:25.732 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1314621' 00:28:25.732 killing process with pid 1314621 00:28:25.732 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1314621 00:28:25.732 Received shutdown signal, test time was about 2.000000 seconds 00:28:25.732 00:28:25.732 Latency(us) 00:28:25.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.732 =================================================================================================================== 00:28:25.732 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:25.732 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1314621 00:28:25.733 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:25.733 09:42:57 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:25.733 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:25.733 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:25.733 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:25.733 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:25.733 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:25.733 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1315194 00:28:25.733 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1315194 /var/tmp/bperf.sock 00:28:25.733 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1315194 ']' 00:28:25.733 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:25.733 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:25.733 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:25.733 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:25.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:25.733 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:25.733 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:25.733 [2024-06-11 09:42:57.457929] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:28:25.733 [2024-06-11 09:42:57.458003] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1315194 ] 00:28:25.733 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:25.733 Zero copy mechanism will not be used. 
00:28:25.733 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.733 [2024-06-11 09:42:57.518003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.993 [2024-06-11 09:42:57.582267] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.993 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:25.993 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:28:25.993 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:25.993 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:25.993 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:26.254 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.254 09:42:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.514 nvme0n1 00:28:26.514 09:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:26.514 09:42:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:26.514 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:26.514 Zero copy mechanism will not be used. 00:28:26.514 Running I/O for 2 seconds... 
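After every run the harness checks that the digests were really computed, and by the expected engine. The acc_module/acc_executed pair comes from the accel framework's per-opcode statistics; condensed from the trace (reusing $SPDK from the sketch above, and with scan_dsa=false the expected module is "software"), the check amounts to:

    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[]
                | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"' \
      | {
          read -r acc_module acc_executed
          (( acc_executed > 0 ))           # some crc32c work actually happened
          [[ $acc_module == software ]]    # and the software module did it
        }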
00:28:29.061 00:28:29.061 Latency(us) 00:28:29.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.061 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:29.061 nvme0n1 : 2.00 2650.35 331.29 0.00 0.00 6032.73 3850.24 10158.08 00:28:29.061 =================================================================================================================== 00:28:29.061 Total : 2650.35 331.29 0.00 0.00 6032.73 3850.24 10158.08 00:28:29.061 0 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:29.061 | select(.opcode=="crc32c") 00:28:29.061 | "\(.module_name) \(.executed)"' 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1315194 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1315194 ']' 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1315194 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1315194 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1315194' 00:28:29.061 killing process with pid 1315194 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1315194 00:28:29.061 Received shutdown signal, test time was about 2.000000 seconds 00:28:29.061 00:28:29.061 Latency(us) 00:28:29.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.061 =================================================================================================================== 00:28:29.061 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1315194 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:29.061 09:43:00 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1315774 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1315774 /var/tmp/bperf.sock 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1315774 ']' 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:29.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:29.061 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:29.061 [2024-06-11 09:43:00.730779] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:28:29.061 [2024-06-11 09:43:00.730833] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1315774 ] 00:28:29.061 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.061 [2024-06-11 09:43:00.789630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.061 [2024-06-11 09:43:00.853295] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.322 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:29.322 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:28:29.322 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:29.322 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:29.322 09:43:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:29.583 09:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:29.583 09:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:29.843 nvme0n1 00:28:29.843 09:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:29.843 09:43:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:29.843 Running I/O for 2 seconds... 
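The four clean-digest cases differ only in this bdevperf command line: randread then randwrite, each at 4 KiB/QD 128 and 128 KiB/QD 16. Annotated below as a reading aid; the flag meanings are lightly paraphrased from bdevperf usage rather than quoted from its docs.

    # -m 2: core mask 0x2, i.e. run the reactor on core 1
    # -r:   private RPC socket for this bdevperf instance
    # -w/-o/-q/-t: workload type, I/O size (bytes), queue depth, run time (s)
    # -z:   start idle and wait for a perform_tests RPC
    # --wait-for-rpc: defer framework init until framework_start_init arrives
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc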
00:28:31.756 00:28:31.756 Latency(us) 00:28:31.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.756 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:31.756 nvme0n1 : 2.01 22030.67 86.06 0.00 0.00 5801.33 3126.61 16274.77 00:28:31.757 =================================================================================================================== 00:28:31.757 Total : 22030.67 86.06 0.00 0.00 5801.33 3126.61 16274.77 00:28:31.757 0 00:28:32.017 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:32.017 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:32.017 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:32.017 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:32.017 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:32.017 | select(.opcode=="crc32c") 00:28:32.017 | "\(.module_name) \(.executed)"' 00:28:32.017 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:32.017 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:32.017 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:32.017 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:32.017 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1315774 00:28:32.017 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1315774 ']' 00:28:32.017 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1315774 00:28:32.017 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:32.017 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:32.017 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1315774 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1315774' 00:28:32.278 killing process with pid 1315774 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1315774 00:28:32.278 Received shutdown signal, test time was about 2.000000 seconds 00:28:32.278 00:28:32.278 Latency(us) 00:28:32.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.278 =================================================================================================================== 00:28:32.278 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1315774 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:32.278 09:43:03 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1316372 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1316372 /var/tmp/bperf.sock 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1316372 ']' 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:32.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:32.278 09:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:32.278 [2024-06-11 09:43:04.029114] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:28:32.278 [2024-06-11 09:43:04.029182] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1316372 ] 00:28:32.278 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:32.278 Zero copy mechanism will not be used. 
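Each bdevperf instance (and, at the end, the target itself) is torn down through the harness's killprocess helper, whose xtrace appears after every run. Stripped of trace noise it is roughly the following; this is reconstructed from the trace itself, so treat the details as approximate:

    killprocess() {
        local pid=$1
        kill -0 "$pid"    # fail fast if the process is already gone
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # (sudo-owned processes get special handling, elided here)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"       # reap it and propagate a meaningful exit status
    }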
00:28:32.278 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.278 [2024-06-11 09:43:04.087718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.539 [2024-06-11 09:43:04.151671] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.539 09:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:32.539 09:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:28:32.539 09:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:32.539 09:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:32.539 09:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:32.799 09:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.799 09:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:33.059 nvme0n1 00:28:33.059 09:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:33.059 09:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:33.059 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:33.059 Zero copy mechanism will not be used. 00:28:33.059 Running I/O for 2 seconds... 
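If a follow-up script needs the headline number out of one of these bdevperf tables, the Total row is the stable thing to scrape. A hypothetical helper (the bdevperf.log filename and the field position are assumptions read off the tables in this log, and note that the shutdown summary prints a second, all-zero Total row):

    # IOPS is the 3rd whitespace-separated field of the "Total :" row.
    awk '$1 == "Total" {print $3}' bdevperf.log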
00:28:35.607 00:28:35.607 Latency(us) 00:28:35.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.607 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:35.607 nvme0n1 : 2.00 5492.67 686.58 0.00 0.00 2907.16 2280.11 14417.92 00:28:35.607 =================================================================================================================== 00:28:35.607 Total : 5492.67 686.58 0.00 0.00 2907.16 2280.11 14417.92 00:28:35.607 0 00:28:35.607 09:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:35.607 09:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:35.607 09:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:35.607 09:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:35.607 09:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:35.607 | select(.opcode=="crc32c") 00:28:35.607 | "\(.module_name) \(.executed)"' 00:28:35.607 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:35.607 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:35.607 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:35.607 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:35.607 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1316372 00:28:35.607 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1316372 ']' 00:28:35.608 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1316372 00:28:35.608 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:35.608 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:35.608 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1316372 00:28:35.608 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:35.608 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:35.608 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1316372' 00:28:35.608 killing process with pid 1316372 00:28:35.608 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1316372 00:28:35.608 Received shutdown signal, test time was about 2.000000 seconds 00:28:35.608 00:28:35.608 Latency(us) 00:28:35.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.608 =================================================================================================================== 00:28:35.608 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:35.608 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1316372 00:28:35.608 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1314312 00:28:35.608 09:43:07 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1314312 ']' 00:28:35.608 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1314312 00:28:35.608 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:35.608 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:35.608 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1314312 00:28:35.608 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:35.608 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:35.608 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1314312' 00:28:35.608 killing process with pid 1314312 00:28:35.608 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1314312 00:28:35.608 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1314312 00:28:35.869 00:28:35.869 real 0m14.535s 00:28:35.869 user 0m28.894s 00:28:35.869 sys 0m3.166s 00:28:35.869 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:35.869 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:35.869 ************************************ 00:28:35.869 END TEST nvmf_digest_clean 00:28:35.869 ************************************ 00:28:35.870 09:43:07 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:35.870 09:43:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:35.870 09:43:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:35.870 09:43:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:35.870 ************************************ 00:28:35.870 START TEST nvmf_digest_error 00:28:35.870 ************************************ 00:28:35.870 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # run_digest_error 00:28:35.870 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:35.870 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:35.870 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:35.870 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:35.870 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1317087 00:28:35.870 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1317087 00:28:35.870 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:35.870 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1317087 ']' 00:28:35.870 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.870 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:28:35.870 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.870 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:35.870 09:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:35.870 [2024-06-11 09:43:07.622678] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:28:35.870 [2024-06-11 09:43:07.622733] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:35.870 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.131 [2024-06-11 09:43:07.704560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.131 [2024-06-11 09:43:07.778128] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:36.131 [2024-06-11 09:43:07.778169] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:36.131 [2024-06-11 09:43:07.778176] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.131 [2024-06-11 09:43:07.778183] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:36.131 [2024-06-11 09:43:07.778188] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
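The nvmf_digest_error fixture that follows differs from the clean case in one key way: the target routes crc32c through accel's error-injection module, and the harness then tells that module to corrupt digests, which is what produces the run of "data digest error on tqpair" / COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions further below. Condensed from the trace; the injection RPCs go to the target's default socket, the bdev_nvme ones to bperf's, and the argument semantics are taken from the trace rather than re-derived from documentation.

    # Target side: route crc32c through the error module, injection off for now.
    $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

    # bperf side: keep NVMe error stats, retry indefinitely, attach with --ddgst.
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Target side again: now corrupt crc32c results (arguments verbatim from
    # the trace) and drive I/O with perform_tests as in the clean case.
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256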
00:28:36.131 [2024-06-11 09:43:07.778212] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.706 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:36.706 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:28:36.706 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:36.706 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:36.706 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:36.706 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:36.706 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:37.027 [2024-06-11 09:43:08.524353] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:37.027 null0 00:28:37.027 [2024-06-11 09:43:08.605000] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.027 [2024-06-11 09:43:08.629186] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1317428 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1317428 /var/tmp/bperf.sock 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1317428 ']' 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local 
max_retries=100 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:37.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:37.027 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:37.027 [2024-06-11 09:43:08.681624] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:28:37.027 [2024-06-11 09:43:08.681674] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1317428 ] 00:28:37.027 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.027 [2024-06-11 09:43:08.740365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.027 [2024-06-11 09:43:08.804767] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.303 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:37.303 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:28:37.303 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:37.303 09:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:37.303 09:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:37.303 09:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:37.303 09:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:37.303 09:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:37.303 09:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:37.303 09:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:37.876 nvme0n1 00:28:37.876 09:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:37.876 09:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:37.876 09:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:37.876 09:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:37.876 09:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:37.876 09:43:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:37.876 Running I/O for 2 seconds...
00:28:37.876 [2024-06-11 09:43:09.566210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ae4a0)
00:28:37.876 [2024-06-11 09:43:09.566248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.876 [2024-06-11 09:43:09.566260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:37.876 [2024-06-11 09:43:09.581376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ae4a0)
00:28:37.876 [2024-06-11 09:43:09.581401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:37.876 [2024-06-11 09:43:09.581410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence (data digest error on tqpair=(0x18ae4a0), the failing READ command print, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every remaining READ of the run, cid and lba varying, from 09:43:09.594585 onward ...]
00:28:39.711 [2024-06-11 09:43:11.363654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.711 [2024-06-11 09:43:11.363667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.711 [2024-06-11 09:43:11.374960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ae4a0) 00:28:39.711 [2024-06-11 09:43:11.374980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.711 [2024-06-11 09:43:11.374988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.712 [2024-06-11 09:43:11.388166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ae4a0) 00:28:39.712 [2024-06-11 09:43:11.388187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.712 [2024-06-11 09:43:11.388195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.712 [2024-06-11 09:43:11.399480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ae4a0) 00:28:39.712 [2024-06-11 09:43:11.399500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.712 [2024-06-11 09:43:11.399508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.712 [2024-06-11 09:43:11.411934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ae4a0) 00:28:39.712 [2024-06-11 09:43:11.411954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.712 [2024-06-11 09:43:11.411963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.712 [2024-06-11 09:43:11.425395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ae4a0) 00:28:39.712 [2024-06-11 09:43:11.425415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.712 [2024-06-11 09:43:11.425424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.712 [2024-06-11 09:43:11.435808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ae4a0) 00:28:39.712 [2024-06-11 09:43:11.435829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.712 [2024-06-11 09:43:11.435838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.712 [2024-06-11 09:43:11.450249] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x18ae4a0) 00:28:39.712 [2024-06-11 09:43:11.450270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.712 [2024-06-11 09:43:11.450279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.712 [2024-06-11 09:43:11.461887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ae4a0) 00:28:39.712 [2024-06-11 09:43:11.461907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.712 [2024-06-11 09:43:11.461915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.712 [2024-06-11 09:43:11.474605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ae4a0) 00:28:39.712 [2024-06-11 09:43:11.474629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.712 [2024-06-11 09:43:11.474637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.712 [2024-06-11 09:43:11.485895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ae4a0) 00:28:39.712 [2024-06-11 09:43:11.485916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.712 [2024-06-11 09:43:11.485925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.712 [2024-06-11 09:43:11.499140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ae4a0) 00:28:39.712 [2024-06-11 09:43:11.499160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.712 [2024-06-11 09:43:11.499169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.712 [2024-06-11 09:43:11.510013] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ae4a0) 00:28:39.712 [2024-06-11 09:43:11.510035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.712 [2024-06-11 09:43:11.510043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.712 [2024-06-11 09:43:11.522800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ae4a0) 00:28:39.712 [2024-06-11 09:43:11.522822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.712 [2024-06-11 09:43:11.522830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.974 [2024-06-11 09:43:11.535959] 
00:28:39.974 [2024-06-11 09:43:11.535959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ae4a0)
00:28:39.974 [2024-06-11 09:43:11.535979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.974 [2024-06-11 09:43:11.535988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.974 [2024-06-11 09:43:11.547261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ae4a0)
00:28:39.974 [2024-06-11 09:43:11.547282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.974 [2024-06-11 09:43:11.547291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:39.974
00:28:39.974                                        Latency(us)
00:28:39.974 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:28:39.974 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:39.974 	 nvme0n1                             :       2.00   20439.25      79.84      0.00      0.00    6254.93    2867.20   17257.81
00:28:39.974 ===================================================================================================================
00:28:39.974 Total                                  :              20439.25      79.84      0.00      0.00    6254.93    2867.20   17257.81
00:28:39.974 0
00:28:39.974 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:39.974 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:39.974 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:39.974 | .driver_specific
00:28:39.974 | .nvme_error
00:28:39.974 | .status_code
00:28:39.974 | .command_transient_transport_error'
00:28:39.974 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:39.974 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 160 > 0 ))
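The xtrace above is the pass/fail check for this run: get_transient_errcount reads bdev_get_iostat over the bperf RPC socket, and jq drills down to the COMMAND TRANSIENT TRANSPORT ERROR counter, which must be non-zero (here 160). A minimal standalone sketch of that pipeline, assuming an SPDK bdevperf instance listening on /var/tmp/bperf.sock whose controller was created with --nvme-error-stat; the function body mirrors the trace, while the surrounding scaffolding is mine:

#!/usr/bin/env bash
# Sketch: count transient transport errors seen by a bdev, as digest.sh does.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

get_transient_errcount() {
    # bdev_get_iostat exposes per-status-code NVMe error counters under
    # driver_specific.nvme_error when the controller was set up with
    # --nvme-error-stat; the jq filter walks down to the transient bucket.
    "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

# The run passes only if the injected digest errors actually surfaced:
(( $(get_transient_errcount nvme0n1) > 0 )) || exit 1

Bucketing completions by NVMe status code is what makes this check cheap: a digest failure surfaces as a transient transport error and is simply counted, with no log parsing needed.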
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1317428
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1317428 ']'
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1317428
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1317428
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1317428'
00:28:40.235 killing process with pid 1317428
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1317428
00:28:40.235 Received shutdown signal, test time was about 2.000000 seconds
00:28:40.235
00:28:40.235                                        Latency(us)
00:28:40.235 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:28:40.235 ===================================================================================================================
00:28:40.235 Total                                  :                   0.00       0.00      0.00      0.00       0.00       0.00       0.00
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1317428
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1318106
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1318106 /var/tmp/bperf.sock
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1318106 ']'
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:40.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:28:40.235 09:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:40.235 [2024-06-11 09:43:12.020736] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:28:40.235 [2024-06-11 09:43:12.020790] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1318106 ]
00:28:40.235 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:40.235 Zero copy mechanism will not be used.
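run_bperf_err then restarts bdevperf for the 131072-byte, queue-depth-16 variant, idle (-z) so it can be configured over RPC before any I/O starts; waitforlisten blocks until the new process answers on the socket. A rough re-creation under the same paths, where the polling loop is only a stand-in for the autotest waitforlisten helper (which also retries an RPC against the socket):

#!/usr/bin/env bash
# Sketch: launch bdevperf as an RPC-driven fixture, as run_bperf_err does.
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
sock=/var/tmp/bperf.sock

# -z keeps bdevperf idle so bdevs and error injection can be configured over
# RPC first; -w/-o/-q/-t match the randread 131072 16 case traced above.
"$bdevperf" -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Crude stand-in for waitforlisten: wait until the UNIX socket exists.
until [[ -S "$sock" ]]; do sleep 0.1; done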
00:28:40.235 EAL: No free 2048 kB hugepages reported on node 1
00:28:40.501 [2024-06-11 09:43:12.077850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:40.501 [2024-06-11 09:43:12.141418] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:28:40.501 09:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:28:40.501 09:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:28:40.501 09:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:40.501 09:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:40.768 09:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:40.768 09:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:40.768 09:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:40.768 09:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:40.768 09:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:40.768 09:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:41.029 nvme0n1
00:28:41.029 09:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:41.029 09:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:41.029 09:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:41.029 09:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:41.029 09:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:41.029 09:43:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:41.290 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:41.290 Zero copy mechanism will not be used.
00:28:41.290 Running I/O for 2 seconds...
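The RPC sequence above is the core of the test: unlimited bdev retries plus --nvme-error-stat so injected errors are counted rather than fatal, a controller attach with data digest (--ddgst) enabled, and crc32c error injection re-armed so completed READs fail their digest check. Condensed into one sketch below; the trace shows bperf_rpc expanding to rpc.py -s /var/tmp/bperf.sock, but it never shows which socket rpc_cmd uses for accel_error_inject_error (xtrace is disabled inside it), so routing everything through the bperf socket here is an assumption:

#!/usr/bin/env bash
# Sketch of the configuration sequence traced above (all commands verbatim
# from the log; only the rpc wrapper and its socket routing are mine).
rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

# Retry forever and keep per-status-code NVMe error counters, so injected
# digest errors are tallied instead of failing the bdevperf job outright.
rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start with injection disabled so the controller attaches cleanly...
rpc accel_error_inject_error -o crc32c -t disable

# ...then attach with data digest enabled; receives now go through crc32c.
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-arm crc32c corruption (-t corrupt -i 32, as traced; the exact -i
# semantics are not visible in this log), then drive the 2-second workload.
rpc accel_error_inject_error -o crc32c -t corrupt -i 32
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests

Once perform_tests starts, every READ whose digest verification hits a corrupted crc32c result completes as COMMAND TRANSIENT TRANSPORT ERROR, which is exactly what the flood of records below documents.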
00:28:41.290 [2024-06-11 09:43:12.946660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400)
00:28:41.290 [2024-06-11 09:43:12.946696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:41.290 [2024-06-11 09:43:12.946708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR triplet repeats for every len:32 READ on tqpair 0x9af400 from 09:43:12.958 through 09:43:13.968 (qid:1 throughout; cid, lba, and sqhd vary) ...]
00:28:42.339 [2024-06-11 09:43:13.978887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400)
00:28:42.339 [2024-06-11 09:43:13.978909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:42.339 [2024-06-11 09:43:13.978917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.339 [2024-06-11 09:43:13.991765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.339 [2024-06-11 09:43:13.991787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.339 [2024-06-11 09:43:13.991796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.339 [2024-06-11 09:43:14.005395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.339 [2024-06-11 09:43:14.005418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.340 [2024-06-11 09:43:14.005427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.340 [2024-06-11 09:43:14.018153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.340 [2024-06-11 09:43:14.018175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.340 [2024-06-11 09:43:14.018183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.340 [2024-06-11 09:43:14.030553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.340 [2024-06-11 09:43:14.030575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.340 [2024-06-11 09:43:14.030583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.340 [2024-06-11 09:43:14.043186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.340 [2024-06-11 09:43:14.043208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.340 [2024-06-11 09:43:14.043216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.340 [2024-06-11 09:43:14.055553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.340 [2024-06-11 09:43:14.055576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.340 [2024-06-11 09:43:14.055584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.340 [2024-06-11 09:43:14.066529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.340 [2024-06-11 09:43:14.066551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.340 [2024-06-11 09:43:14.066561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.340 [2024-06-11 09:43:14.076530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.340 [2024-06-11 09:43:14.076551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.340 [2024-06-11 09:43:14.076560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.340 [2024-06-11 09:43:14.087117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.340 [2024-06-11 09:43:14.087140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.340 [2024-06-11 09:43:14.087150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.340 [2024-06-11 09:43:14.098326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.340 [2024-06-11 09:43:14.098347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.340 [2024-06-11 09:43:14.098355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.340 [2024-06-11 09:43:14.110074] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.340 [2024-06-11 09:43:14.110095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.340 [2024-06-11 09:43:14.110103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.340 [2024-06-11 09:43:14.121746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.340 [2024-06-11 09:43:14.121767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.340 [2024-06-11 09:43:14.121775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.340 [2024-06-11 09:43:14.132972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.340 [2024-06-11 09:43:14.132994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.340 [2024-06-11 09:43:14.133002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.340 [2024-06-11 09:43:14.145262] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.340 [2024-06-11 09:43:14.145284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.340 
[2024-06-11 09:43:14.145292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.601 [2024-06-11 09:43:14.157469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.157490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.157502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.170228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.170249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.170258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.181902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.181925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.181934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.193110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.193131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.193140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.203891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.203913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.203922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.214490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.214512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.214520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.226460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.226481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.226490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.236131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.236152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.236161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.247703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.247724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.247732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.259920] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.259945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.259953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.272089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.272111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.272119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.284659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.284681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.284690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.296768] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.296789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.296797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.307850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.307872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.307880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.318471] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.318493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.318501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.330282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.330304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.330312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.342488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.342510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.342519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.355942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.355963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.355972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.366996] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.367018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.367026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.377672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.377693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.377702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.389041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.389062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.389071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.401323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.401346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.401356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.602 [2024-06-11 09:43:14.413176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.602 [2024-06-11 09:43:14.413197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.602 [2024-06-11 09:43:14.413206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.864 [2024-06-11 09:43:14.424688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.864 [2024-06-11 09:43:14.424710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.864 [2024-06-11 09:43:14.424719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.864 [2024-06-11 09:43:14.435561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.864 [2024-06-11 09:43:14.435582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.435590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.446601] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.865 [2024-06-11 09:43:14.446624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.446634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.457908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.865 [2024-06-11 09:43:14.457929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.457941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.470516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.865 
[2024-06-11 09:43:14.470538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.470546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.482030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.865 [2024-06-11 09:43:14.482051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.482059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.493254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.865 [2024-06-11 09:43:14.493276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.493285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.503456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.865 [2024-06-11 09:43:14.503478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.503486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.514089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.865 [2024-06-11 09:43:14.514111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.514119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.525532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.865 [2024-06-11 09:43:14.525554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.525562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.537786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.865 [2024-06-11 09:43:14.537807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.537816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.549872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9af400) 00:28:42.865 [2024-06-11 09:43:14.549893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.549901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.562275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.865 [2024-06-11 09:43:14.562300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.562308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.573928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.865 [2024-06-11 09:43:14.573950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.573958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.585503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.865 [2024-06-11 09:43:14.585525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.585533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.596568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.865 [2024-06-11 09:43:14.596590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.596598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.608960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.865 [2024-06-11 09:43:14.608982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.608990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.620549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.865 [2024-06-11 09:43:14.620570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.620578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.633907] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.865 [2024-06-11 09:43:14.633929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.633937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.646163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.865 [2024-06-11 09:43:14.646185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.646194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.657715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.865 [2024-06-11 09:43:14.657737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.657746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:42.865 [2024-06-11 09:43:14.667651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:42.865 [2024-06-11 09:43:14.667673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.865 [2024-06-11 09:43:14.667681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.127 [2024-06-11 09:43:14.680219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.127 [2024-06-11 09:43:14.680241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.127 [2024-06-11 09:43:14.680249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.127 [2024-06-11 09:43:14.693152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.127 [2024-06-11 09:43:14.693174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.127 [2024-06-11 09:43:14.693183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.127 [2024-06-11 09:43:14.705191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.127 [2024-06-11 09:43:14.705213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.127 [2024-06-11 09:43:14.705221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:43.127 [2024-06-11 09:43:14.717709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.127 [2024-06-11 09:43:14.717732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.127 [2024-06-11 09:43:14.717741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.127 [2024-06-11 09:43:14.730711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.127 [2024-06-11 09:43:14.730734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.127 [2024-06-11 09:43:14.730742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.127 [2024-06-11 09:43:14.742966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.127 [2024-06-11 09:43:14.742988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.127 [2024-06-11 09:43:14.742996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.127 [2024-06-11 09:43:14.755513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.127 [2024-06-11 09:43:14.755535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.127 [2024-06-11 09:43:14.755544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.127 [2024-06-11 09:43:14.766546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.127 [2024-06-11 09:43:14.766568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.127 [2024-06-11 09:43:14.766580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.127 [2024-06-11 09:43:14.779509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.127 [2024-06-11 09:43:14.779530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.127 [2024-06-11 09:43:14.779538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.127 [2024-06-11 09:43:14.791112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.127 [2024-06-11 09:43:14.791133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.127 [2024-06-11 09:43:14.791142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.127 [2024-06-11 09:43:14.802729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.127 [2024-06-11 09:43:14.802752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.127 [2024-06-11 09:43:14.802762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.127 [2024-06-11 09:43:14.814017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.127 [2024-06-11 09:43:14.814039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.127 [2024-06-11 09:43:14.814047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.127 [2024-06-11 09:43:14.824550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.128 [2024-06-11 09:43:14.824571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.128 [2024-06-11 09:43:14.824579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.128 [2024-06-11 09:43:14.835631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.128 [2024-06-11 09:43:14.835653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.128 [2024-06-11 09:43:14.835662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.128 [2024-06-11 09:43:14.847055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.128 [2024-06-11 09:43:14.847076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.128 [2024-06-11 09:43:14.847085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.128 [2024-06-11 09:43:14.860529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.128 [2024-06-11 09:43:14.860550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.128 [2024-06-11 09:43:14.860559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.128 [2024-06-11 09:43:14.872227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.128 [2024-06-11 09:43:14.872249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.128 [2024-06-11 09:43:14.872257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.128 [2024-06-11 09:43:14.884066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.128 [2024-06-11 09:43:14.884088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.128 [2024-06-11 09:43:14.884096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.128 [2024-06-11 09:43:14.897196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.128 [2024-06-11 09:43:14.897218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.128 [2024-06-11 09:43:14.897227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:43.128 [2024-06-11 09:43:14.909471] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.128 [2024-06-11 09:43:14.909493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.128 [2024-06-11 09:43:14.909501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:43.128 [2024-06-11 09:43:14.919331] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.128 [2024-06-11 09:43:14.919353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.128 [2024-06-11 09:43:14.919361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:43.128 [2024-06-11 09:43:14.930671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9af400) 00:28:43.128 [2024-06-11 09:43:14.930693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:43.128 [2024-06-11 09:43:14.930702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:43.128 00:28:43.128 Latency(us) 00:28:43.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.128 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:43.128 nvme0n1 : 2.00 2551.40 318.93 0.00 0.00 6266.66 1522.35 16274.77 00:28:43.128 =================================================================================================================== 00:28:43.128 Total : 2551.40 318.93 0.00 0.00 6266.66 1522.35 16274.77 00:28:43.128 0 00:28:43.389 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:43.389 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:43.389 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:43.389 | .driver_specific 00:28:43.389 | 
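The get_transient_errcount trace above is the heart of the pass/fail decision: it reads back the per-status-code NVMe error counters that bdev_nvme accumulates because the session was configured with bdev_nvme_set_options --nvme-error-stat (the same call is traced again below for the randwrite run), and the (( ... > 0 )) check that follows passes only if at least one command completed with TRANSIENT TRANSPORT ERROR. A minimal standalone sketch of the same query, reusing the socket, bdev name, and jq path from this run:

    # Fetch iostat from the bdevperf app over its RPC socket and print only
    # the transient transport error counter for nvme0n1.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The test passes when this number is non-zero, i.e. when the injected CRC corruptions surfaced as transient transport errors rather than as silent data corruption.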
00:28:43.389 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 ))
00:28:43.389 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1318106
00:28:43.389 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1318106 ']'
00:28:43.389 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1318106
00:28:43.389 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:28:43.389 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:43.389 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1318106
00:28:43.650 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:28:43.650 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:28:43.650 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1318106'
00:28:43.650 killing process with pid 1318106
00:28:43.650 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1318106
00:28:43.650 Received shutdown signal, test time was about 2.000000 seconds
00:28:43.650
00:28:43.650                                                              Latency(us)
00:28:43.650 Device Information                                        : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:28:43.650 ===================================================================================================================
00:28:43.650 Total                                                     :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:28:43.650 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1318106
00:28:43.650 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:43.650 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:43.650 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:43.650 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:43.650 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:43.650 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1318747
00:28:43.650 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1318747 /var/tmp/bperf.sock
00:28:43.650 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1318747 ']'
00:28:43.650 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:43.650 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:43.650 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:28:43.650 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:43.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:43.650 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:28:43.650 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:43.650 [2024-06-11 09:43:15.403245] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:28:43.650 [2024-06-11 09:43:15.403301] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1318747 ]
00:28:43.650 EAL: No free 2048 kB hugepages reported on node 1
00:28:43.650 [2024-06-11 09:43:15.460510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:43.911 [2024-06-11 09:43:15.524135] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:28:43.911 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:28:43.911 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:28:43.911 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:43.911 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:44.171 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:44.171 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:44.171 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:44.171 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:44.171 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:44.171 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:44.432 nvme0n1
00:28:44.432 09:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:44.432 09:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:44.432 09:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:44.432 09:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:44.432 09:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:44.432 09:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:44.693 Running I/O for 2 seconds...
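The trace above is the complete recipe for the write-path digest test: enable error counters on the bdevperf side, make sure no crc32c corruption is active while the controller attaches with data digest (--ddgst) enabled, then corrupt the next 256 crc32c operations and kick off the queued randwrite job. A condensed sketch of the same RPC sequence, with SPDK standing in for this job's checkout path; because the rpc_cmd expansion is hidden by xtrace_disable here, it is an assumption that rpc_cmd addresses the nvmf target application's default RPC socket while bperf_rpc addresses /var/tmp/bperf.sock:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk  # this job's checkout
    BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"     # bdevperf application
    TGT="$SPDK/scripts/rpc.py"                              # nvmf target, default socket (assumed)
    # Keep per-status-code NVMe error counters and retry transient failures
    # instead of failing the bdev I/O outright.
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Make sure no crc32c corruption is active while the controller attaches.
    $TGT accel_error_inject_error -o crc32c -t disable
    # Attach over TCP with data digest enabled; the injected CRC corruption is
    # what will make these digests fail verification.
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt the next 256 crc32c operations, then run the queued randwrite job.
    $TGT accel_error_inject_error -o crc32c -t corrupt -i 256
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests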
seconds... 00:28:44.693 [2024-06-11 09:43:16.277679] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f8618 00:28:44.693 [2024-06-11 09:43:16.278679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-11 09:43:16.278710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:44.693 [2024-06-11 09:43:16.289639] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f0350 00:28:44.693 [2024-06-11 09:43:16.290647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-11 09:43:16.290669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:44.693 [2024-06-11 09:43:16.301381] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e38d0 00:28:44.693 [2024-06-11 09:43:16.302363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-11 09:43:16.302382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:44.693 [2024-06-11 09:43:16.314131] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f6020 00:28:44.693 [2024-06-11 09:43:16.315239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-11 09:43:16.315259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.693 [2024-06-11 09:43:16.325733] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e3060 00:28:44.693 [2024-06-11 09:43:16.326828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-11 09:43:16.326847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.693 [2024-06-11 09:43:16.337434] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e84c0 00:28:44.693 [2024-06-11 09:43:16.338531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-11 09:43:16.338549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.693 [2024-06-11 09:43:16.349148] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f46d0 00:28:44.694 [2024-06-11 09:43:16.350245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-11 09:43:16.350264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 
00:28:44.694 [2024-06-11 09:43:16.360839] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190ecc78 [2024-06-11 09:43:16.361937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-06-11 09:43:16.361956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.694 [2024-06-11 09:43:16.372547] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f35f0 [2024-06-11 09:43:16.373657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-06-11 09:43:16.373675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.694 [2024-06-11 09:43:16.384237] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f6020 [2024-06-11 09:43:16.385338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-06-11 09:43:16.385357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.694 [2024-06-11 09:43:16.395937] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e3060 [2024-06-11 09:43:16.397025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-06-11 09:43:16.397044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.694 [2024-06-11 09:43:16.407604] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e84c0 [2024-06-11 09:43:16.408656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-06-11 09:43:16.408675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.694 [2024-06-11 09:43:16.419299] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f46d0 [2024-06-11 09:43:16.420393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-06-11 09:43:16.420415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.694 [2024-06-11 09:43:16.430991] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190ecc78 [2024-06-11 09:43:16.432083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-06-11 09:43:16.432103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.694 [2024-06-11 09:43:16.442668] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f35f0 00:28:44.694 [2024-06-11 09:43:16.443756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-11 09:43:16.443775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.694 [2024-06-11 09:43:16.454364] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f6020 00:28:44.694 [2024-06-11 09:43:16.455419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-11 09:43:16.455439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.694 [2024-06-11 09:43:16.466053] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e3060 00:28:44.694 [2024-06-11 09:43:16.467146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-11 09:43:16.467164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.694 [2024-06-11 09:43:16.477735] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e84c0 00:28:44.694 [2024-06-11 09:43:16.478809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-11 09:43:16.478828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.694 [2024-06-11 09:43:16.489421] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f46d0 00:28:44.694 [2024-06-11 09:43:16.490506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-11 09:43:16.490525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.694 [2024-06-11 09:43:16.501101] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190ecc78 00:28:44.694 [2024-06-11 09:43:16.502190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-11 09:43:16.502209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.956 [2024-06-11 09:43:16.512819] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f35f0 00:28:44.956 [2024-06-11 09:43:16.513911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.956 [2024-06-11 09:43:16.513930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.956 [2024-06-11 09:43:16.524493] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f6020 00:28:44.956 [2024-06-11 09:43:16.525555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.956 [2024-06-11 09:43:16.525574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.956 [2024-06-11 09:43:16.536201] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e3060 00:28:44.956 [2024-06-11 09:43:16.537308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.956 [2024-06-11 09:43:16.537330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.956 [2024-06-11 09:43:16.547895] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e84c0 00:28:44.956 [2024-06-11 09:43:16.549001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.956 [2024-06-11 09:43:16.549020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.956 [2024-06-11 09:43:16.559650] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f46d0 00:28:44.956 [2024-06-11 09:43:16.560711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.956 [2024-06-11 09:43:16.560729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.956 [2024-06-11 09:43:16.571327] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190ecc78 00:28:44.956 [2024-06-11 09:43:16.572416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.956 [2024-06-11 09:43:16.572434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.956 [2024-06-11 09:43:16.583002] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f35f0 00:28:44.956 [2024-06-11 09:43:16.584092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.956 [2024-06-11 09:43:16.584112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.956 [2024-06-11 09:43:16.594693] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f6020 00:28:44.956 [2024-06-11 09:43:16.595792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.956 [2024-06-11 09:43:16.595811] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.956 [2024-06-11 09:43:16.606402] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e3060 00:28:44.956 [2024-06-11 09:43:16.607502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.956 [2024-06-11 09:43:16.607522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.956 [2024-06-11 09:43:16.618070] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e84c0 00:28:44.956 [2024-06-11 09:43:16.619169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.956 [2024-06-11 09:43:16.619187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.956 [2024-06-11 09:43:16.629867] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f46d0 00:28:44.956 [2024-06-11 09:43:16.630920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.956 [2024-06-11 09:43:16.630939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.956 [2024-06-11 09:43:16.641551] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190ecc78 00:28:44.956 [2024-06-11 09:43:16.642645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.956 [2024-06-11 09:43:16.642665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.956 [2024-06-11 09:43:16.653230] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f35f0 00:28:44.956 [2024-06-11 09:43:16.654329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.956 [2024-06-11 09:43:16.654348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.956 [2024-06-11 09:43:16.664908] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f6020 00:28:44.956 [2024-06-11 09:43:16.666003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.956 [2024-06-11 09:43:16.666022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.956 [2024-06-11 09:43:16.676609] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e3060 00:28:44.956 [2024-06-11 09:43:16.677660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.956 [2024-06-11 09:43:16.677679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.956 [2024-06-11 09:43:16.688300] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e84c0 00:28:44.956 [2024-06-11 09:43:16.689395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.956 [2024-06-11 09:43:16.689415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.956 [2024-06-11 09:43:16.699977] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f46d0 00:28:44.957 [2024-06-11 09:43:16.701069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.957 [2024-06-11 09:43:16.701088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.957 [2024-06-11 09:43:16.711653] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190ecc78 00:28:44.957 [2024-06-11 09:43:16.712753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.957 [2024-06-11 09:43:16.712772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.957 [2024-06-11 09:43:16.723341] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f35f0 00:28:44.957 [2024-06-11 09:43:16.724430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.957 [2024-06-11 09:43:16.724452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.957 [2024-06-11 09:43:16.735034] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f6020 00:28:44.957 [2024-06-11 09:43:16.736128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.957 [2024-06-11 09:43:16.736147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.957 [2024-06-11 09:43:16.746961] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e3060 00:28:44.957 [2024-06-11 09:43:16.748052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.957 [2024-06-11 09:43:16.748071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.957 [2024-06-11 09:43:16.758638] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e84c0 00:28:44.957 [2024-06-11 09:43:16.759707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.957 [2024-06-11 
09:43:16.759726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.957 [2024-06-11 09:43:16.770333] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f46d0 00:28:45.218 [2024-06-11 09:43:16.771420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:16.771439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:16.782007] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190ecc78 00:28:45.218 [2024-06-11 09:43:16.783101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:16.783120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:16.793711] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f35f0 00:28:45.218 [2024-06-11 09:43:16.794795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:16.794814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:16.805403] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f6020 00:28:45.218 [2024-06-11 09:43:16.806497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:16.806515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:16.817096] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e3060 00:28:45.218 [2024-06-11 09:43:16.818197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:16.818217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:16.828783] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e84c0 00:28:45.218 [2024-06-11 09:43:16.829875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:16.829898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:16.840473] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f46d0 00:28:45.218 [2024-06-11 09:43:16.841547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:45.218 [2024-06-11 09:43:16.841567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:16.852133] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190ecc78 00:28:45.218 [2024-06-11 09:43:16.853229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:16.853247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:16.863828] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f35f0 00:28:45.218 [2024-06-11 09:43:16.864917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:16.864936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:16.875491] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f6020 00:28:45.218 [2024-06-11 09:43:16.876577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:16.876596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:16.887176] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e3060 00:28:45.218 [2024-06-11 09:43:16.888278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:16.888297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:16.898843] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e84c0 00:28:45.218 [2024-06-11 09:43:16.899944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:16.899962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:16.910523] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f46d0 00:28:45.218 [2024-06-11 09:43:16.911611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:16.911630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:16.922194] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190ecc78 00:28:45.218 [2024-06-11 09:43:16.923288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17082 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:16.923307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:16.933897] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f35f0 00:28:45.218 [2024-06-11 09:43:16.934990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:16.935010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:16.945553] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f6020 00:28:45.218 [2024-06-11 09:43:16.946646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:16.946665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:16.957232] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e3060 00:28:45.218 [2024-06-11 09:43:16.958329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:16.958348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:16.968924] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e84c0 00:28:45.218 [2024-06-11 09:43:16.970010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:16.970029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:16.980619] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f46d0 00:28:45.218 [2024-06-11 09:43:16.981711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:16.981730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:16.992294] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190ecc78 00:28:45.218 [2024-06-11 09:43:16.993392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:16.993410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:17.003976] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f35f0 00:28:45.218 [2024-06-11 09:43:17.005067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1980 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:17.005086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:17.015661] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f6020 00:28:45.218 [2024-06-11 09:43:17.016753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.218 [2024-06-11 09:43:17.016772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.218 [2024-06-11 09:43:17.027342] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e3060 00:28:45.219 [2024-06-11 09:43:17.028430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.219 [2024-06-11 09:43:17.028449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.480 [2024-06-11 09:43:17.039555] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e01f8 00:28:45.480 [2024-06-11 09:43:17.040613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.480 [2024-06-11 09:43:17.040632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:45.480 [2024-06-11 09:43:17.051500] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e5ec8 00:28:45.480 [2024-06-11 09:43:17.052865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.480 [2024-06-11 09:43:17.052884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:45.480 [2024-06-11 09:43:17.063214] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e4de8 00:28:45.480 [2024-06-11 09:43:17.064576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.480 [2024-06-11 09:43:17.064595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:45.480 [2024-06-11 09:43:17.074932] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e3d08 00:28:45.480 [2024-06-11 09:43:17.076292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.480 [2024-06-11 09:43:17.076311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:45.480 [2024-06-11 09:43:17.086656] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e2c28 00:28:45.480 [2024-06-11 09:43:17.088017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:49 nsid:1 lba:23898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.480 [2024-06-11 09:43:17.088036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:45.480 [2024-06-11 09:43:17.098396] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190eb328 00:28:45.480 [2024-06-11 09:43:17.099749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.480 [2024-06-11 09:43:17.099768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:45.480 [2024-06-11 09:43:17.110102] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190fb8b8 00:28:45.480 [2024-06-11 09:43:17.111449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.480 [2024-06-11 09:43:17.111468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:45.480 [2024-06-11 09:43:17.121812] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190fdeb0 00:28:45.480 [2024-06-11 09:43:17.123174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.480 [2024-06-11 09:43:17.123193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:45.480 [2024-06-11 09:43:17.133546] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190fcdd0 00:28:45.480 [2024-06-11 09:43:17.134939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.480 [2024-06-11 09:43:17.134962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:45.480 [2024-06-11 09:43:17.145320] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190dfdc0 00:28:45.480 [2024-06-11 09:43:17.146676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.480 [2024-06-11 09:43:17.146695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:45.480 [2024-06-11 09:43:17.158561] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e0a68 00:28:45.480 [2024-06-11 09:43:17.160573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.480 [2024-06-11 09:43:17.160592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:45.480 [2024-06-11 09:43:17.169477] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e4578 00:28:45.480 [2024-06-11 09:43:17.171000] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.480 [2024-06-11 09:43:17.171019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.480 [2024-06-11 09:43:17.181098] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190efae0 00:28:45.480 [2024-06-11 09:43:17.182610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.480 [2024-06-11 09:43:17.182629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.480 [2024-06-11 09:43:17.192815] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e84c0 00:28:45.480 [2024-06-11 09:43:17.194336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.480 [2024-06-11 09:43:17.194355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.480 [2024-06-11 09:43:17.204527] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f3e60 00:28:45.480 [2024-06-11 09:43:17.206014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.480 [2024-06-11 09:43:17.206033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.480 [2024-06-11 09:43:17.216279] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190ddc00 00:28:45.480 [2024-06-11 09:43:17.217794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.480 [2024-06-11 09:43:17.217814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.480 [2024-06-11 09:43:17.228000] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190dece0 00:28:45.480 [2024-06-11 09:43:17.229522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.480 [2024-06-11 09:43:17.229541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.481 [2024-06-11 09:43:17.239712] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f8e88 00:28:45.481 [2024-06-11 09:43:17.241236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.481 [2024-06-11 09:43:17.241255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.481 [2024-06-11 09:43:17.251436] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f7da8 00:28:45.481 [2024-06-11 09:43:17.252955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.481 [2024-06-11 09:43:17.252975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.481 [2024-06-11 09:43:17.263184] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f6cc8 00:28:45.481 [2024-06-11 09:43:17.264674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.481 [2024-06-11 09:43:17.264693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.481 [2024-06-11 09:43:17.274885] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190fb048 00:28:45.481 [2024-06-11 09:43:17.276406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.481 [2024-06-11 09:43:17.276425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.481 [2024-06-11 09:43:17.286622] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f9f68 00:28:45.481 [2024-06-11 09:43:17.288135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.481 [2024-06-11 09:43:17.288155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.742 [2024-06-11 09:43:17.298313] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190ea248 00:28:45.742 [2024-06-11 09:43:17.299801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.742 [2024-06-11 09:43:17.299820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.742 [2024-06-11 09:43:17.310014] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f6020 00:28:45.742 [2024-06-11 09:43:17.311530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.742 [2024-06-11 09:43:17.311550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.742 [2024-06-11 09:43:17.321722] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f4f40 00:28:45.742 [2024-06-11 09:43:17.323244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.742 [2024-06-11 09:43:17.323263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.742 [2024-06-11 09:43:17.333461] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e73e0 00:28:45.742 [2024-06-11 
09:43:17.334942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.742 [2024-06-11 09:43:17.334960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.742 [2024-06-11 09:43:17.345180] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e6300 00:28:45.742 [2024-06-11 09:43:17.346699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.742 [2024-06-11 09:43:17.346718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.742 [2024-06-11 09:43:17.356889] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e5220 00:28:45.742 [2024-06-11 09:43:17.358410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.742 [2024-06-11 09:43:17.358429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.742 [2024-06-11 09:43:17.368609] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190eff18 00:28:45.742 [2024-06-11 09:43:17.370129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.742 [2024-06-11 09:43:17.370150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.742 [2024-06-11 09:43:17.380329] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e88f8 00:28:45.742 [2024-06-11 09:43:17.381845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.742 [2024-06-11 09:43:17.381864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.742 [2024-06-11 09:43:17.392057] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f4298 00:28:45.742 [2024-06-11 09:43:17.393586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.742 [2024-06-11 09:43:17.393604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.742 [2024-06-11 09:43:17.403782] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f31b8 00:28:45.742 [2024-06-11 09:43:17.405307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.742 [2024-06-11 09:43:17.405329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.742 [2024-06-11 09:43:17.415519] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190de8a8 
00:28:45.742 [2024-06-11 09:43:17.417035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.742 [2024-06-11 09:43:17.417054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.742 [2024-06-11 09:43:17.427235] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e9e10 00:28:45.742 [2024-06-11 09:43:17.428760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.742 [2024-06-11 09:43:17.428779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.742 [2024-06-11 09:43:17.438922] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f81e0 00:28:45.742 [2024-06-11 09:43:17.440437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.742 [2024-06-11 09:43:17.440459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.743 [2024-06-11 09:43:17.450618] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f7100 00:28:45.743 [2024-06-11 09:43:17.452120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.743 [2024-06-11 09:43:17.452139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.743 [2024-06-11 09:43:17.462336] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190ed920 00:28:45.743 [2024-06-11 09:43:17.463856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.743 [2024-06-11 09:43:17.463874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.743 [2024-06-11 09:43:17.474023] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190fa3a0 00:28:45.743 [2024-06-11 09:43:17.475543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.743 [2024-06-11 09:43:17.475561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.743 [2024-06-11 09:43:17.485765] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f92c0 00:28:45.743 [2024-06-11 09:43:17.487291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.743 [2024-06-11 09:43:17.487310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.743 [2024-06-11 09:43:17.497483] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with 
pdu=0x2000190eaef0 00:28:45.743 [2024-06-11 09:43:17.498993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.743 [2024-06-11 09:43:17.499012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.743 [2024-06-11 09:43:17.509203] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f5378 00:28:45.743 [2024-06-11 09:43:17.510723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.743 [2024-06-11 09:43:17.510743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.743 [2024-06-11 09:43:17.520914] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e7818 00:28:45.743 [2024-06-11 09:43:17.522431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.743 [2024-06-11 09:43:17.522450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.743 [2024-06-11 09:43:17.532652] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e6738 00:28:45.743 [2024-06-11 09:43:17.534166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.743 [2024-06-11 09:43:17.534184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.743 [2024-06-11 09:43:17.544360] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e5658 00:28:45.743 [2024-06-11 09:43:17.545873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.743 [2024-06-11 09:43:17.545891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.743 [2024-06-11 09:43:17.556088] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e4578 00:28:46.004 [2024-06-11 09:43:17.557622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.004 [2024-06-11 09:43:17.557642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.004 [2024-06-11 09:43:17.567801] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190efae0 00:28:46.004 [2024-06-11 09:43:17.569318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.004 [2024-06-11 09:43:17.569337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.004 [2024-06-11 09:43:17.579505] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d18c10) with pdu=0x2000190e84c0 00:28:46.004 [2024-06-11 09:43:17.581020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.004 [2024-06-11 09:43:17.581039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.004 [2024-06-11 09:43:17.591214] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f3e60 00:28:46.004 [2024-06-11 09:43:17.592725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.004 [2024-06-11 09:43:17.592745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.004 [2024-06-11 09:43:17.602906] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190ddc00 00:28:46.004 [2024-06-11 09:43:17.604426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.004 [2024-06-11 09:43:17.604444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.004 [2024-06-11 09:43:17.614620] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190dece0 00:28:46.004 [2024-06-11 09:43:17.616103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.004 [2024-06-11 09:43:17.616122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.004 [2024-06-11 09:43:17.626408] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f8e88 00:28:46.004 [2024-06-11 09:43:17.627927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.004 [2024-06-11 09:43:17.627947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.004 [2024-06-11 09:43:17.638148] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f7da8 00:28:46.004 [2024-06-11 09:43:17.639630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.004 [2024-06-11 09:43:17.639649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.004 [2024-06-11 09:43:17.649821] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f6cc8 00:28:46.004 [2024-06-11 09:43:17.651337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.004 [2024-06-11 09:43:17.651356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.004 [2024-06-11 09:43:17.661531] tcp.c:2062:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190fb048 00:28:46.004 [2024-06-11 09:43:17.663045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.004 [2024-06-11 09:43:17.663064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.004 [2024-06-11 09:43:17.673239] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f9f68 00:28:46.004 [2024-06-11 09:43:17.674759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.004 [2024-06-11 09:43:17.674778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.004 [2024-06-11 09:43:17.684934] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190ea248 00:28:46.004 [2024-06-11 09:43:17.686442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.004 [2024-06-11 09:43:17.686461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.004 [2024-06-11 09:43:17.696651] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f6020 00:28:46.004 [2024-06-11 09:43:17.698169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.004 [2024-06-11 09:43:17.698188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.004 [2024-06-11 09:43:17.708363] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f4f40 00:28:46.004 [2024-06-11 09:43:17.709887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.004 [2024-06-11 09:43:17.709906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.004 [2024-06-11 09:43:17.720129] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e73e0 00:28:46.004 [2024-06-11 09:43:17.721650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.004 [2024-06-11 09:43:17.721669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.004 [2024-06-11 09:43:17.731834] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e6300 00:28:46.004 [2024-06-11 09:43:17.733364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.004 [2024-06-11 09:43:17.733383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.004 [2024-06-11 09:43:17.743708] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e5220 00:28:46.005 [2024-06-11 09:43:17.745190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.005 [2024-06-11 09:43:17.745215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.005 [2024-06-11 09:43:17.755408] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190eff18 00:28:46.005 [2024-06-11 09:43:17.756919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.005 [2024-06-11 09:43:17.756938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.005 [2024-06-11 09:43:17.767110] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e88f8 00:28:46.005 [2024-06-11 09:43:17.768625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.005 [2024-06-11 09:43:17.768643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.005 [2024-06-11 09:43:17.778825] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f4298 00:28:46.005 [2024-06-11 09:43:17.780338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.005 [2024-06-11 09:43:17.780357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.005 [2024-06-11 09:43:17.790544] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f31b8 00:28:46.005 [2024-06-11 09:43:17.792067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.005 [2024-06-11 09:43:17.792086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.005 [2024-06-11 09:43:17.802261] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190de8a8 00:28:46.005 [2024-06-11 09:43:17.803777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.005 [2024-06-11 09:43:17.803796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.005 [2024-06-11 09:43:17.813984] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e9e10 00:28:46.005 [2024-06-11 09:43:17.815490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.005 [2024-06-11 09:43:17.815509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.266 
[2024-06-11 09:43:17.825700] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f81e0 00:28:46.266 [2024-06-11 09:43:17.827214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.266 [2024-06-11 09:43:17.827232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.266 [2024-06-11 09:43:17.837430] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f7100 00:28:46.266 [2024-06-11 09:43:17.838961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.266 [2024-06-11 09:43:17.838980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.266 [2024-06-11 09:43:17.849136] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190ed920 00:28:46.266 [2024-06-11 09:43:17.850658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.266 [2024-06-11 09:43:17.850676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.266 [2024-06-11 09:43:17.860824] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190fa3a0 00:28:46.266 [2024-06-11 09:43:17.862339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.266 [2024-06-11 09:43:17.862358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.266 [2024-06-11 09:43:17.872537] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f92c0 00:28:46.266 [2024-06-11 09:43:17.874046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.266 [2024-06-11 09:43:17.874065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.266 [2024-06-11 09:43:17.884199] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190eaef0 00:28:46.266 [2024-06-11 09:43:17.885727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.266 [2024-06-11 09:43:17.885746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.266 [2024-06-11 09:43:17.895897] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f5378 00:28:46.266 [2024-06-11 09:43:17.897411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.266 [2024-06-11 09:43:17.897430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:46.266 [2024-06-11 09:43:17.907634] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e7818 00:28:46.266 [2024-06-11 09:43:17.909112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.266 [2024-06-11 09:43:17.909131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.266 [2024-06-11 09:43:17.919336] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e6738 00:28:46.266 [2024-06-11 09:43:17.920853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.266 [2024-06-11 09:43:17.920872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.266 [2024-06-11 09:43:17.931072] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e5658 00:28:46.266 [2024-06-11 09:43:17.932596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.266 [2024-06-11 09:43:17.932615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.266 [2024-06-11 09:43:17.942776] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e4578 00:28:46.266 [2024-06-11 09:43:17.944267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.266 [2024-06-11 09:43:17.944286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.266 [2024-06-11 09:43:17.954474] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190efae0 00:28:46.266 [2024-06-11 09:43:17.955997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.266 [2024-06-11 09:43:17.956016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.266 [2024-06-11 09:43:17.966171] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e84c0 00:28:46.266 [2024-06-11 09:43:17.967689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.266 [2024-06-11 09:43:17.967708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.266 [2024-06-11 09:43:17.977861] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f3e60 00:28:46.266 [2024-06-11 09:43:17.979378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.266 [2024-06-11 09:43:17.979397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.266 [2024-06-11 09:43:17.989577] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190ddc00 00:28:46.266 [2024-06-11 09:43:17.991092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.266 [2024-06-11 09:43:17.991111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.266 [2024-06-11 09:43:18.001297] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190dece0 00:28:46.266 [2024-06-11 09:43:18.002780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.266 [2024-06-11 09:43:18.002798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.266 [2024-06-11 09:43:18.013001] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f8e88 00:28:46.267 [2024-06-11 09:43:18.014527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.267 [2024-06-11 09:43:18.014546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.267 [2024-06-11 09:43:18.024693] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f7da8 00:28:46.267 [2024-06-11 09:43:18.026182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.267 [2024-06-11 09:43:18.026201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.267 [2024-06-11 09:43:18.036424] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f6cc8 00:28:46.267 [2024-06-11 09:43:18.037939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.267 [2024-06-11 09:43:18.037957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.267 [2024-06-11 09:43:18.048142] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190fb048 00:28:46.267 [2024-06-11 09:43:18.049632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.267 [2024-06-11 09:43:18.049654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.267 [2024-06-11 09:43:18.059825] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f9f68 00:28:46.267 [2024-06-11 09:43:18.061340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.267 [2024-06-11 09:43:18.061359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.267 [2024-06-11 09:43:18.071544] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190ea248 00:28:46.267 [2024-06-11 09:43:18.073052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.267 [2024-06-11 09:43:18.073071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.528 [2024-06-11 09:43:18.083228] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f6020 00:28:46.528 [2024-06-11 09:43:18.084750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.528 [2024-06-11 09:43:18.084770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.528 [2024-06-11 09:43:18.094917] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f4f40 00:28:46.528 [2024-06-11 09:43:18.096399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.528 [2024-06-11 09:43:18.096417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.528 [2024-06-11 09:43:18.106621] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e73e0 00:28:46.528 [2024-06-11 09:43:18.108117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.528 [2024-06-11 09:43:18.108136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.528 [2024-06-11 09:43:18.118345] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e6300 00:28:46.528 [2024-06-11 09:43:18.119873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.528 [2024-06-11 09:43:18.119892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.528 [2024-06-11 09:43:18.130054] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e5220 00:28:46.528 [2024-06-11 09:43:18.131549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.528 [2024-06-11 09:43:18.131568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.528 [2024-06-11 09:43:18.141766] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190eff18 00:28:46.528 [2024-06-11 09:43:18.143283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.528 [2024-06-11 09:43:18.143302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.528 [2024-06-11 09:43:18.153479] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e88f8 00:28:46.528 [2024-06-11 09:43:18.154997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.528 [2024-06-11 09:43:18.155016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.528 [2024-06-11 09:43:18.165194] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f4298 00:28:46.528 [2024-06-11 09:43:18.166712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.528 [2024-06-11 09:43:18.166731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.528 [2024-06-11 09:43:18.176891] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f31b8 00:28:46.528 [2024-06-11 09:43:18.178407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.528 [2024-06-11 09:43:18.178425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.528 [2024-06-11 09:43:18.188598] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190de8a8 00:28:46.528 [2024-06-11 09:43:18.190117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.528 [2024-06-11 09:43:18.190136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.528 [2024-06-11 09:43:18.200295] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190e9e10 00:28:46.528 [2024-06-11 09:43:18.201814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.528 [2024-06-11 09:43:18.201833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.528 [2024-06-11 09:43:18.211981] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f81e0 00:28:46.528 [2024-06-11 09:43:18.213506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.528 [2024-06-11 09:43:18.213525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.528 [2024-06-11 09:43:18.223708] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f7100 00:28:46.528 [2024-06-11 09:43:18.225224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.528 [2024-06-11 09:43:18.225244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:46.528 [2024-06-11 09:43:18.235441] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190ed920
00:28:46.528 [2024-06-11 09:43:18.236954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.528 [2024-06-11 09:43:18.236972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:46.528 [2024-06-11 09:43:18.247156] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190fa3a0
00:28:46.528 [2024-06-11 09:43:18.248680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.528 [2024-06-11 09:43:18.248700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:46.528 [2024-06-11 09:43:18.258855] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190f92c0
00:28:46.528 [2024-06-11 09:43:18.260388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.528 [2024-06-11 09:43:18.260408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:46.528 [2024-06-11 09:43:18.270589] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18c10) with pdu=0x2000190eaef0
00:28:46.529 [2024-06-11 09:43:18.272105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:46.529 [2024-06-11 09:43:18.272123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:46.529
00:28:46.529 Latency(us)
00:28:46.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:46.529 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:46.529 nvme0n1 : 2.01 21797.33 85.15 0.00 0.00 5863.16 2689.71 13489.49
00:28:46.529 ===================================================================================================================
00:28:46.529 Total : 21797.33 85.15 0.00 0.00 5863.16 2689.71 13489.49
00:28:46.529 0
00:28:46.529 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:46.529 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:46.529 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:46.529 | .driver_specific
00:28:46.529 | .nvme_error
00:28:46.529 | .status_code
00:28:46.529 | .command_transient_transport_error'
00:28:46.529 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:46.789 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 171 > 0 ))
00:28:46.789 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1318747
00:28:46.789 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1318747 ']'
00:28:46.789 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1318747
00:28:46.789 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:28:46.789 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:46.789 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1318747
00:28:46.789 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:28:46.789 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:28:46.789 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1318747'
00:28:46.789 killing process with pid 1318747
00:28:46.789 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1318747
00:28:46.789 Received shutdown signal, test time was about 2.000000 seconds
00:28:46.789
00:28:46.789 Latency(us)
00:28:46.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:46.789 ===================================================================================================================
00:28:46.789 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:46.789 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1318747
00:28:47.050 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:47.050 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:47.050 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:47.050 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:47.050 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:47.050 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1319324
00:28:47.050 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1319324 /var/tmp/bperf.sock
00:28:47.050 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1319324 ']'
00:28:47.050 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:47.050 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:47.050 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:28:47.050 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:47.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
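Note: the get_transient_errcount trace above reduces to one RPC plus a jq filter over bdev_get_iostat output. A minimal standalone sketch of the same check, assuming the same RPC socket (/var/tmp/bperf.sock), bdev name (nvme0n1) and SPDK checkout path as this run:

#!/usr/bin/env bash
# Sketch: read the transient-transport-error counter for nvme0n1, as
# host/digest.sh does above. Assumes bdevperf is serving RPCs on
# /var/tmp/bperf.sock and bdev_nvme_set_options --nvme-error-stat was applied.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')

# The test passes when at least one such error was counted,
# e.g. (( 171 > 0 )) in the trace above.
(( errs > 0 )) && echo "transient transport errors observed: $errs"

Because --bdev-retry-count -1 keeps retrying the failed WRITEs, the injected digest errors surface only in this counter, which is consistent with Fail/s staying 0.00 in the summary table above.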
00:28:47.050 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:28:47.050 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:47.050 [2024-06-11 09:43:18.740664] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:28:47.050 [2024-06-11 09:43:18.740719] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1319324 ]
00:28:47.050 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:47.050 Zero copy mechanism will not be used.
00:28:47.050 EAL: No free 2048 kB hugepages reported on node 1
00:28:47.050 [2024-06-11 09:43:18.798124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:47.050 [2024-06-11 09:43:18.861867] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:28:47.311 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:28:47.311 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:28:47.311 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:47.311 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:47.572 09:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:47.572 09:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:47.572 09:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:47.572 09:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:47.572 09:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:47.572 09:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:47.832 nvme0n1
00:28:47.832 09:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:47.832 09:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:47.832 09:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:47.832 09:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:47.832 09:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:47.832 09:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:48.093 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:48.093 Zero copy mechanism will not be used.
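Note: the traces above are the whole error-injection recipe for the randwrite/131072/qd16 pass: enable per-command NVMe error statistics, attach over TCP with data digest (--ddgst) enabled, then tell the accel layer to corrupt CRC-32C results so the computed data digests stop matching. Condensed into a sketch, using the same socket, address and NQN as in the trace (the -t corrupt -i 32 arguments are copied verbatim from the run):

#!/usr/bin/env bash
# Sketch of the digest-error setup traced above (paths/addresses from this run).
rpc() {
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bperf.sock "$@"
}

# Count per-command NVMe errors and retry failed I/O indefinitely.
rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Make sure no stale injection is active, then attach with DDGST enabled.
rpc accel_error_inject_error -o crc32c -t disable
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt CRC-32C results (-t corrupt -i 32, as traced), then run the workload.
rpc accel_error_inject_error -o crc32c -t corrupt -i 32
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests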
00:28:48.093 Running I/O for 2 seconds... 00:28:48.093 [2024-06-11 09:43:19.718327] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.093 [2024-06-11 09:43:19.718670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.093 [2024-06-11 09:43:19.718702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.093 [2024-06-11 09:43:19.732257] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.093 [2024-06-11 09:43:19.732651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.093 [2024-06-11 09:43:19.732674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.093 [2024-06-11 09:43:19.743753] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.093 [2024-06-11 09:43:19.744140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.093 [2024-06-11 09:43:19.744162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.093 [2024-06-11 09:43:19.754051] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.093 [2024-06-11 09:43:19.754497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.093 [2024-06-11 09:43:19.754518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.093 [2024-06-11 09:43:19.763887] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.093 [2024-06-11 09:43:19.763990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.093 [2024-06-11 09:43:19.764008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.093 [2024-06-11 09:43:19.774217] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.093 [2024-06-11 09:43:19.774595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.093 [2024-06-11 09:43:19.774616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.093 [2024-06-11 09:43:19.784061] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.093 [2024-06-11 09:43:19.784459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.093 [2024-06-11 09:43:19.784480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.093 [2024-06-11 09:43:19.794644] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.093 [2024-06-11 09:43:19.795012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.093 [2024-06-11 09:43:19.795037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.093 [2024-06-11 09:43:19.804261] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.093 [2024-06-11 09:43:19.804651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.093 [2024-06-11 09:43:19.804672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.093 [2024-06-11 09:43:19.812978] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.093 [2024-06-11 09:43:19.813066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.093 [2024-06-11 09:43:19.813085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.093 [2024-06-11 09:43:19.823136] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.093 [2024-06-11 09:43:19.823527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.093 [2024-06-11 09:43:19.823548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.093 [2024-06-11 09:43:19.833292] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.093 [2024-06-11 09:43:19.833677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.093 [2024-06-11 09:43:19.833698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.093 [2024-06-11 09:43:19.842590] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.093 [2024-06-11 09:43:19.842956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.093 [2024-06-11 09:43:19.842976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.093 [2024-06-11 09:43:19.853732] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.093 [2024-06-11 09:43:19.854119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.093 [2024-06-11 09:43:19.854139] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.093 [2024-06-11 09:43:19.863198] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.093 [2024-06-11 09:43:19.863579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.093 [2024-06-11 09:43:19.863600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.093 [2024-06-11 09:43:19.873221] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.093 [2024-06-11 09:43:19.873615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.093 [2024-06-11 09:43:19.873635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.093 [2024-06-11 09:43:19.882610] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.093 [2024-06-11 09:43:19.882879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.093 [2024-06-11 09:43:19.882899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.093 [2024-06-11 09:43:19.892092] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.093 [2024-06-11 09:43:19.892490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.094 [2024-06-11 09:43:19.892510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.094 [2024-06-11 09:43:19.901212] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.094 [2024-06-11 09:43:19.901479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.094 [2024-06-11 09:43:19.901498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.355 [2024-06-11 09:43:19.910009] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.355 [2024-06-11 09:43:19.910271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.355 [2024-06-11 09:43:19.910290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.355 [2024-06-11 09:43:19.918585] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.355 [2024-06-11 09:43:19.918856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.355 
[2024-06-11 09:43:19.918876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.355 [2024-06-11 09:43:19.928141] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.355 [2024-06-11 09:43:19.928512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.355 [2024-06-11 09:43:19.928533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.355 [2024-06-11 09:43:19.935055] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.355 [2024-06-11 09:43:19.935431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.355 [2024-06-11 09:43:19.935452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.355 [2024-06-11 09:43:19.942165] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.355 [2024-06-11 09:43:19.942428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.355 [2024-06-11 09:43:19.942448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.355 [2024-06-11 09:43:19.948713] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.355 [2024-06-11 09:43:19.949086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.355 [2024-06-11 09:43:19.949107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.355 [2024-06-11 09:43:19.958261] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.355 [2024-06-11 09:43:19.958632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.355 [2024-06-11 09:43:19.958653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.355 [2024-06-11 09:43:19.967341] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.355 [2024-06-11 09:43:19.967698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.355 [2024-06-11 09:43:19.967718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.355 [2024-06-11 09:43:19.975888] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.355 [2024-06-11 09:43:19.976261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.355 [2024-06-11 09:43:19.976281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.355 [2024-06-11 09:43:19.984623] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.355 [2024-06-11 09:43:19.984881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:19.984901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:19.993885] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:19.994241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:19.994262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.003080] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.003363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.003385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.012153] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.012540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.012561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.018924] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.019292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.019312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.025504] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.025883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.025908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.031669] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.032034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.032054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.036805] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.037060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.037080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.042765] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.043035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.043055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.048938] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.049288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.049309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.054929] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.055305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.055330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.060617] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.060962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.060983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.068336] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.068719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.068739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.074645] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.074729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.074747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.082298] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.082686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.082707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.089662] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.090015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.090035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.098659] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.098954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.098974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.108525] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.108875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.108895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.118592] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.118965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.118986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.128891] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.129268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.129288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.137722] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 
[2024-06-11 09:43:20.138089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.138109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.144880] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.145243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.145263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.151583] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.151925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.151951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.158998] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.159252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.159272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.356 [2024-06-11 09:43:20.165771] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.356 [2024-06-11 09:43:20.166127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.356 [2024-06-11 09:43:20.166148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.618 [2024-06-11 09:43:20.173916] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.618 [2024-06-11 09:43:20.174281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.618 [2024-06-11 09:43:20.174301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.618 [2024-06-11 09:43:20.181767] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.618 [2024-06-11 09:43:20.182129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.618 [2024-06-11 09:43:20.182149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.618 [2024-06-11 09:43:20.189986] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.618 [2024-06-11 09:43:20.190352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.618 [2024-06-11 09:43:20.190372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.618 [2024-06-11 09:43:20.196608] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.618 [2024-06-11 09:43:20.196864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.618 [2024-06-11 09:43:20.196884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.618 [2024-06-11 09:43:20.202188] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.618 [2024-06-11 09:43:20.202449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.618 [2024-06-11 09:43:20.202468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.618 [2024-06-11 09:43:20.208241] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.618 [2024-06-11 09:43:20.208607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.618 [2024-06-11 09:43:20.208627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.618 [2024-06-11 09:43:20.214097] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.618 [2024-06-11 09:43:20.214363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.618 [2024-06-11 09:43:20.214382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.618 [2024-06-11 09:43:20.219821] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.618 [2024-06-11 09:43:20.220167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.618 [2024-06-11 09:43:20.220187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.618 [2024-06-11 09:43:20.225413] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.618 [2024-06-11 09:43:20.225669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.225688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.231518] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.231773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.231793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.237304] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.237679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.237699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.243383] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.243752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.243772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.251884] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.252229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.252249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.260633] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.260992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.261012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.270199] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.270549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.270570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.280243] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.280649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.280670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
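Every *ERROR* record above is the host's data-digest check failing: when DDGST is negotiated on an NVMe/TCP connection, each DATA-carrying PDU ends with a CRC32C of its DATA field, and data_crc32_calc_done compares the digest on the wire against one computed locally. A minimal self-contained sketch of that checksum, assuming nothing beyond the CRC32C definition in the spec, is below; it is a bitwise reference loop only, whereas SPDK's actual helpers are table-driven or SSE4.2-accelerated, and the function name and test vector here are illustrative.

/*
 * Reference CRC32C (Castagnoli), the checksum carried in the NVMe/TCP
 * DDGST field.  data_crc32_calc_done fails above when the digest
 * computed over a PDU's DATA does not match the DDGST received.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t
crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;	/* seed defined by CRC32C */

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int bit = 0; bit < 8; bit++) {
			/* 0x82F63B78 is the reflected CRC32C polynomial */
			crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
		}
	}
	return crc ^ 0xFFFFFFFFu;	/* final inversion */
}

int
main(void)
{
	/* Well-known CRC32C check vector: "123456789" -> 0xE3069283 */
	const uint8_t msg[] = "123456789";

	printf("crc32c = 0x%08X\n", crc32c(msg, sizeof(msg) - 1));
	return 0;
}

Because the digest covers only the DATA field, a single flipped payload byte is enough to make the check fail, which is why each injected corruption above produces exactly one error record and one failed WRITE completion.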
00:28:48.619 [2024-06-11 09:43:20.290408] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.290713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.290732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.300838] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.301029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.301047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.310637] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.311025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.311045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.319020] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.319093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.319111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.326814] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.327085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.327105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.334112] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.334452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.334480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.341910] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.342177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.342198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.348009] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.348361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.348390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.353827] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.354205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.354225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.359587] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.359946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.359966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.364807] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.365061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.365080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.369983] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.370335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.370355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.375863] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.376207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.376227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.381999] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.382371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.382392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.387763] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.388015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.388034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.392788] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.393042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.393062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.398568] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.398824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.398843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.405978] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.406348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.406369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.619 [2024-06-11 09:43:20.412221] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.619 [2024-06-11 09:43:20.412575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.619 [2024-06-11 09:43:20.412596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.620 [2024-06-11 09:43:20.418277] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.620 [2024-06-11 09:43:20.418648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.620 [2024-06-11 09:43:20.418668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.620 [2024-06-11 09:43:20.424406] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.620 [2024-06-11 09:43:20.424764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.620 [2024-06-11 09:43:20.424784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.620 [2024-06-11 09:43:20.430201] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.620 [2024-06-11 09:43:20.430461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.620 [2024-06-11 09:43:20.430480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.881 [2024-06-11 09:43:20.436203] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.881 [2024-06-11 09:43:20.436601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.881 [2024-06-11 09:43:20.436621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.881 [2024-06-11 09:43:20.441995] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.881 [2024-06-11 09:43:20.442249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.881 [2024-06-11 09:43:20.442269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.881 [2024-06-11 09:43:20.447628] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.881 [2024-06-11 09:43:20.447883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.881 [2024-06-11 09:43:20.447904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.881 [2024-06-11 09:43:20.452760] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.881 [2024-06-11 09:43:20.453123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.881 [2024-06-11 09:43:20.453143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.881 [2024-06-11 09:43:20.458638] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.881 [2024-06-11 09:43:20.458986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.881 [2024-06-11 09:43:20.459006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.881 [2024-06-11 09:43:20.463882] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.881 [2024-06-11 09:43:20.464134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.881 
[2024-06-11 09:43:20.464160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.881 [2024-06-11 09:43:20.469409] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.881 [2024-06-11 09:43:20.469766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.881 [2024-06-11 09:43:20.469786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.881 [2024-06-11 09:43:20.474725] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.881 [2024-06-11 09:43:20.474980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.881 [2024-06-11 09:43:20.475000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.881 [2024-06-11 09:43:20.480325] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.881 [2024-06-11 09:43:20.480724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.881 [2024-06-11 09:43:20.480744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.881 [2024-06-11 09:43:20.487344] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.881 [2024-06-11 09:43:20.487694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.881 [2024-06-11 09:43:20.487714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.881 [2024-06-11 09:43:20.493784] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.881 [2024-06-11 09:43:20.494132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.881 [2024-06-11 09:43:20.494152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.881 [2024-06-11 09:43:20.501058] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.881 [2024-06-11 09:43:20.501430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.881 [2024-06-11 09:43:20.501454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.881 [2024-06-11 09:43:20.510014] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.881 [2024-06-11 09:43:20.510395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.881 [2024-06-11 09:43:20.510415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.881 [2024-06-11 09:43:20.518105] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.881 [2024-06-11 09:43:20.518470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.881 [2024-06-11 09:43:20.518490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.881 [2024-06-11 09:43:20.526358] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.882 [2024-06-11 09:43:20.526752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-06-11 09:43:20.526773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.882 [2024-06-11 09:43:20.535804] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.882 [2024-06-11 09:43:20.536166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-06-11 09:43:20.536186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.882 [2024-06-11 09:43:20.547280] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.882 [2024-06-11 09:43:20.547641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-06-11 09:43:20.547662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.882 [2024-06-11 09:43:20.559622] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.882 [2024-06-11 09:43:20.559986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-06-11 09:43:20.560007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.882 [2024-06-11 09:43:20.572006] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.882 [2024-06-11 09:43:20.572373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-06-11 09:43:20.572394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.882 [2024-06-11 09:43:20.583270] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.882 [2024-06-11 09:43:20.583360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-06-11 09:43:20.583378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.882 [2024-06-11 09:43:20.595507] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.882 [2024-06-11 09:43:20.595885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-06-11 09:43:20.595905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.882 [2024-06-11 09:43:20.606533] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.882 [2024-06-11 09:43:20.606906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-06-11 09:43:20.606926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.882 [2024-06-11 09:43:20.617414] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.882 [2024-06-11 09:43:20.617783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-06-11 09:43:20.617803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.882 [2024-06-11 09:43:20.629788] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.882 [2024-06-11 09:43:20.630177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-06-11 09:43:20.630197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.882 [2024-06-11 09:43:20.642008] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.882 [2024-06-11 09:43:20.642384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-06-11 09:43:20.642404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.882 [2024-06-11 09:43:20.653973] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.882 [2024-06-11 09:43:20.654067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-06-11 09:43:20.654085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.882 [2024-06-11 09:43:20.665781] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.882 [2024-06-11 09:43:20.666159] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-06-11 09:43:20.666179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.882 [2024-06-11 09:43:20.677388] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.882 [2024-06-11 09:43:20.677783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-06-11 09:43:20.677804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.882 [2024-06-11 09:43:20.688958] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:48.882 [2024-06-11 09:43:20.689346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.882 [2024-06-11 09:43:20.689369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.143 [2024-06-11 09:43:20.701968] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.143 [2024-06-11 09:43:20.702268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.143 [2024-06-11 09:43:20.702288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.143 [2024-06-11 09:43:20.714782] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.143 [2024-06-11 09:43:20.715160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.143 [2024-06-11 09:43:20.715181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.143 [2024-06-11 09:43:20.727922] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.143 [2024-06-11 09:43:20.728330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.143 [2024-06-11 09:43:20.728350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.143 [2024-06-11 09:43:20.739688] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.143 [2024-06-11 09:43:20.739958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.143 [2024-06-11 09:43:20.739977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.143 [2024-06-11 09:43:20.750853] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.143 
[2024-06-11 09:43:20.750992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.143 [2024-06-11 09:43:20.751010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.143 [2024-06-11 09:43:20.762247] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.143 [2024-06-11 09:43:20.762657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.143 [2024-06-11 09:43:20.762677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.143 [2024-06-11 09:43:20.773530] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.143 [2024-06-11 09:43:20.773885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.143 [2024-06-11 09:43:20.773905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.143 [2024-06-11 09:43:20.783111] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.143 [2024-06-11 09:43:20.783493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.143 [2024-06-11 09:43:20.783514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.143 [2024-06-11 09:43:20.790022] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.143 [2024-06-11 09:43:20.790382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.143 [2024-06-11 09:43:20.790402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.143 [2024-06-11 09:43:20.799558] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.143 [2024-06-11 09:43:20.799918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.144 [2024-06-11 09:43:20.799938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.144 [2024-06-11 09:43:20.808270] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.144 [2024-06-11 09:43:20.808532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.144 [2024-06-11 09:43:20.808551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.144 [2024-06-11 09:43:20.815355] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.144 [2024-06-11 09:43:20.815613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.144 [2024-06-11 09:43:20.815632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.144 [2024-06-11 09:43:20.825552] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.144 [2024-06-11 09:43:20.825824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.144 [2024-06-11 09:43:20.825844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.144 [2024-06-11 09:43:20.834496] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.144 [2024-06-11 09:43:20.834863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.144 [2024-06-11 09:43:20.834883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.144 [2024-06-11 09:43:20.845737] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.144 [2024-06-11 09:43:20.846010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.144 [2024-06-11 09:43:20.846030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.144 [2024-06-11 09:43:20.853584] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.144 [2024-06-11 09:43:20.853958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.144 [2024-06-11 09:43:20.853978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.144 [2024-06-11 09:43:20.862903] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.144 [2024-06-11 09:43:20.863298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.144 [2024-06-11 09:43:20.863323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.144 [2024-06-11 09:43:20.871092] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.144 [2024-06-11 09:43:20.871470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.144 [2024-06-11 09:43:20.871490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.144 [2024-06-11 09:43:20.880176] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.144 [2024-06-11 09:43:20.880539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.144 [2024-06-11 09:43:20.880559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.144 [2024-06-11 09:43:20.889729] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.144 [2024-06-11 09:43:20.890118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.144 [2024-06-11 09:43:20.890137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.144 [2024-06-11 09:43:20.898757] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.144 [2024-06-11 09:43:20.898840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.144 [2024-06-11 09:43:20.898858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.144 [2024-06-11 09:43:20.908740] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.144 [2024-06-11 09:43:20.909115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.144 [2024-06-11 09:43:20.909135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.144 [2024-06-11 09:43:20.917059] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.144 [2024-06-11 09:43:20.917323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.144 [2024-06-11 09:43:20.917342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.144 [2024-06-11 09:43:20.925512] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.144 [2024-06-11 09:43:20.925880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.144 [2024-06-11 09:43:20.925900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.144 [2024-06-11 09:43:20.934873] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.144 [2024-06-11 09:43:20.935257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.144 [2024-06-11 09:43:20.935277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
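Note that every one of these completions carries the same status, printed as "(00/22) ... p:0 m:0 dnr:0": status code type 0x0 (generic command status) with status code 0x22, Command Transient Transport Error, and DNR (do not retry) clear, so the initiator is free to retry the WRITE. The sketch below unpacks the 16-bit status half of completion dword 3 into those fields; the bit layout follows the NVMe base specification, but the struct and names are illustrative, not SPDK's.

/*
 * Decode the phase tag and status field from the upper half of
 * completion-queue-entry dword 3.  SCT 0x0 / SC 0x22 is the generic
 * "Command Transient Transport Error" seen throughout this log.
 */
#include <stdint.h>
#include <stdio.h>

struct cqe_status {
	uint8_t p;	/* bit 0: phase tag */
	uint8_t sc;	/* bits 1-8: status code */
	uint8_t sct;	/* bits 9-11: status code type */
	uint8_t m;	/* bit 14: more (details in error log page) */
	uint8_t dnr;	/* bit 15: do not retry */
};

static struct cqe_status
decode_status(uint16_t raw)
{
	struct cqe_status s = {
		.p   = raw & 0x1,
		.sc  = (raw >> 1) & 0xFF,
		.sct = (raw >> 9) & 0x7,
		.m   = (raw >> 14) & 0x1,
		.dnr = (raw >> 15) & 0x1,
	};
	return s;
}

int
main(void)
{
	/* SC 0x22 in bits 1-8, everything else clear: reproduces the
	 * "(00/22) ... p:0 m:0 dnr:0" completions printed above. */
	uint16_t raw = 0x22 << 1;
	struct cqe_status s = decode_status(raw);

	printf("(%02x/%02x) p:%u m:%u dnr:%u\n",
	       s.sct, s.sc, s.p, s.m, s.dnr);
	return 0;
}

A transient transport error is deliberately retryable: the data never reached the namespace intact, but nothing is wrong with the command itself, so the test expects the queue to keep operating, which the steadily advancing sqhd values in these records confirm.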
00:28:49.144 [2024-06-11 09:43:20.943178] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.144 [2024-06-11 09:43:20.943550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.144 [2024-06-11 09:43:20.943574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.144 [2024-06-11 09:43:20.949598] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.144 [2024-06-11 09:43:20.950005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.144 [2024-06-11 09:43:20.950025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.144 [2024-06-11 09:43:20.957042] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.144 [2024-06-11 09:43:20.957169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.144 [2024-06-11 09:43:20.957187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.405 [2024-06-11 09:43:20.967450] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.405 [2024-06-11 09:43:20.967807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.405 [2024-06-11 09:43:20.967827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.405 [2024-06-11 09:43:20.977474] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.405 [2024-06-11 09:43:20.977816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.405 [2024-06-11 09:43:20.977835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.405 [2024-06-11 09:43:20.988888] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.405 [2024-06-11 09:43:20.988992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.405 [2024-06-11 09:43:20.989010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.405 [2024-06-11 09:43:20.999880] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.405 [2024-06-11 09:43:21.000256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.405 [2024-06-11 09:43:21.000276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.405 [2024-06-11 09:43:21.010512] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.405 [2024-06-11 09:43:21.010909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.405 [2024-06-11 09:43:21.010929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.405 [2024-06-11 09:43:21.022635] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.405 [2024-06-11 09:43:21.023006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.405 [2024-06-11 09:43:21.023026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.405 [2024-06-11 09:43:21.033574] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.405 [2024-06-11 09:43:21.033962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.405 [2024-06-11 09:43:21.033982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.405 [2024-06-11 09:43:21.044720] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.405 [2024-06-11 09:43:21.045110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.405 [2024-06-11 09:43:21.045130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.405 [2024-06-11 09:43:21.055003] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.405 [2024-06-11 09:43:21.055396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.405 [2024-06-11 09:43:21.055416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:49.405 [2024-06-11 09:43:21.066363] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.405 [2024-06-11 09:43:21.066757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.405 [2024-06-11 09:43:21.066777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:49.405 [2024-06-11 09:43:21.077175] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.405 [2024-06-11 09:43:21.077556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.405 [2024-06-11 09:43:21.077576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.405 [2024-06-11 09:43:21.086670] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.405 [2024-06-11 09:43:21.087040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.405 [2024-06-11 09:43:21.087060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same record pair repeats for roughly sixty further WRITE commands (len:32, lba values between 640 and 25184) from 09:43:21.099 through 09:43:21.686: a tcp.c:2062:data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90" followed by the nvme_qpair.c WRITE command print and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; the duplicated records are trimmed here ...]
00:28:49.992 [2024-06-11 09:43:21.693808] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.992 [2024-06-11 09:43:21.694076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.992 [2024-06-11 09:43:21.694094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:49.992 [2024-06-11 09:43:21.700306] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d18f50) with pdu=0x2000190fef90 00:28:49.992 [2024-06-11 09:43:21.700460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.992 [2024-06-11 09:43:21.700479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.992 00:28:49.992 Latency(us) 00:28:49.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.992 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:49.992 nvme0n1 : 2.00 3656.53 457.07 0.00 0.00 4369.25 2157.23 17257.81 00:28:49.992 =================================================================================================================== 00:28:49.992 Total : 3656.53 457.07 0.00 0.00 4369.25 2157.23 17257.81 00:28:49.992 0 00:28:49.992 09:43:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:49.992 09:43:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:49.992 09:43:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:49.992 09:43:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:49.992 | .driver_specific 00:28:49.992 | .nvme_error 00:28:49.992 | .status_code 00:28:49.992 | .command_transient_transport_error' 00:28:50.254 09:43:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 )) 00:28:50.254 09:43:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1319324 00:28:50.254 09:43:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1319324 ']' 00:28:50.254 09:43:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1319324 00:28:50.254 09:43:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:28:50.254 09:43:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:50.254 09:43:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1319324 00:28:50.254 09:43:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:50.254 09:43:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:50.254 09:43:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1319324' 00:28:50.254 killing process with pid 1319324 00:28:50.254 09:43:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1319324 00:28:50.254 Received shutdown signal, test time was about 2.000000 seconds 00:28:50.254 00:28:50.254 Latency(us) 00:28:50.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.254 =================================================================================================================== 00:28:50.254 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:50.254 09:43:21 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1319324 00:28:50.514 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1317087 00:28:50.514 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1317087 ']' 00:28:50.514 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1317087 00:28:50.514 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:28:50.514 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:50.514 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1317087 00:28:50.514 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:50.514 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:50.514 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1317087' 00:28:50.514 killing process with pid 1317087 00:28:50.514 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1317087 00:28:50.514 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1317087 00:28:50.514 00:28:50.514 real 0m14.756s 00:28:50.514 user 0m29.156s 00:28:50.514 sys 0m3.207s 00:28:50.514 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:50.514 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:50.514 ************************************ 00:28:50.514 END TEST nvmf_digest_error 00:28:50.514 ************************************ 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:50.774 rmmod nvme_tcp 00:28:50.774 rmmod nvme_fabrics 00:28:50.774 rmmod nvme_keyring 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1317087 ']' 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1317087 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@949 -- # '[' -z 1317087 ']' 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@953 -- # kill -0 1317087 00:28:50.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1317087) - No such process 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@976 -- # echo 'Process with pid 
1317087 is not found' 00:28:50.774 Process with pid 1317087 is not found 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:50.774 09:43:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.319 09:43:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:53.319 00:28:53.319 real 0m38.864s 00:28:53.319 user 1m0.119s 00:28:53.319 sys 0m11.774s 00:28:53.319 09:43:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:53.319 09:43:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:53.319 ************************************ 00:28:53.319 END TEST nvmf_digest 00:28:53.319 ************************************ 00:28:53.319 09:43:24 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:28:53.319 09:43:24 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:28:53.319 09:43:24 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:28:53.319 09:43:24 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:53.319 09:43:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:53.319 09:43:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:53.319 09:43:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:53.319 ************************************ 00:28:53.319 START TEST nvmf_bdevperf 00:28:53.319 ************************************ 00:28:53.319 09:43:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:53.319 * Looking for test storage... 
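A note on how the nvmf_digest_error run that just finished reaches its verdict: the pass/fail signal does not come from the repeated error records but from the bdev error counters. Each deliberately corrupted data digest is caught by the target-side CRC32C check in tcp.c and surfaces to the host as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; host/digest.sh then reads the accumulated count over the bperf RPC socket. A minimal sketch of that query, reusing the socket path, bdev name, and jq filter shown in the trace above:

  # Query the NVMe error counters for nvme0n1 over the bdevperf RPC socket
  # and extract the transient-transport-error count.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The test asserts that the returned count is greater than zero; the run above reported 236.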
00:28:53.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:53.319 09:43:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:53.319 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:53.319 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:53.319 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:53.319 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:53.319 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:53.319 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:53.319 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:53.319 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:53.319 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:53.319 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:53.319 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:53.320 09:43:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.908 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.908 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:59.908 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:59.908 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:59.909 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:59.909 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:59.909 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:59.909 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:59.909 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:00.170 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:00.170 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:00.170 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:00.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:00.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:29:00.170 00:29:00.171 --- 10.0.0.2 ping statistics --- 00:29:00.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.171 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:00.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:00.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:29:00.171 00:29:00.171 --- 10.0.0.1 ping statistics --- 00:29:00.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.171 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1324154 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1324154 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 1324154 ']' 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:00.171 09:43:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.171 [2024-06-11 09:43:31.930721] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:29:00.171 [2024-06-11 09:43:31.930772] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.171 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.431 [2024-06-11 09:43:31.998417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:00.431 [2024-06-11 09:43:32.064746] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:00.431 [2024-06-11 09:43:32.064783] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.431 [2024-06-11 09:43:32.064791] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.431 [2024-06-11 09:43:32.064798] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.431 [2024-06-11 09:43:32.064806] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:00.431 [2024-06-11 09:43:32.064914] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.431 [2024-06-11 09:43:32.065068] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.431 [2024-06-11 09:43:32.065069] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:00.431 09:43:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:00.431 09:43:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:29:00.431 09:43:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:00.431 09:43:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:00.431 09:43:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.431 09:43:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.431 09:43:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:00.431 09:43:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:00.431 09:43:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.431 [2024-06-11 09:43:32.202756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.431 09:43:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:00.431 09:43:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:00.431 09:43:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:00.431 09:43:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.431 Malloc0 00:29:00.431 09:43:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:00.431 09:43:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:00.431 09:43:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:00.431 09:43:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.691 09:43:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:00.691 09:43:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:00.691 09:43:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:00.691 09:43:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.691 09:43:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:00.691 09:43:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:00.691 09:43:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 
00:29:00.691 09:43:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:00.691 [2024-06-11 09:43:32.270748] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:00.691 09:43:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:00.691 09:43:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:29:00.691 09:43:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:29:00.691 09:43:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:29:00.691 09:43:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:29:00.691 09:43:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:29:00.691 09:43:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:29:00.691 {
00:29:00.691 "params": {
00:29:00.691 "name": "Nvme$subsystem",
00:29:00.691 "trtype": "$TEST_TRANSPORT",
00:29:00.691 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:00.691 "adrfam": "ipv4",
00:29:00.691 "trsvcid": "$NVMF_PORT",
00:29:00.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:00.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:00.691 "hdgst": ${hdgst:-false},
00:29:00.691 "ddgst": ${ddgst:-false}
00:29:00.691 },
00:29:00.691 "method": "bdev_nvme_attach_controller"
00:29:00.691 }
00:29:00.691 EOF
00:29:00.691 )")
00:29:00.691 09:43:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:29:00.691 09:43:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:29:00.691 09:43:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:29:00.691 09:43:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:29:00.691 "params": {
00:29:00.691 "name": "Nvme1",
00:29:00.691 "trtype": "tcp",
00:29:00.691 "traddr": "10.0.0.2",
00:29:00.691 "adrfam": "ipv4",
00:29:00.691 "trsvcid": "4420",
00:29:00.691 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:00.691 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:00.691 "hdgst": false,
00:29:00.691 "ddgst": false
00:29:00.691 },
00:29:00.691 "method": "bdev_nvme_attach_controller"
00:29:00.691 }'
00:29:00.691 [2024-06-11 09:43:32.323641] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:29:00.691 [2024-06-11 09:43:32.323688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1324177 ]
00:29:00.691 EAL: No free 2048 kB hugepages reported on node 1
00:29:00.691 [2024-06-11 09:43:32.397872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:00.691 [2024-06-11 09:43:32.462275] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:29:00.952 Running I/O for 1 seconds...
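
While that 1-second baseline runs, it is worth collapsing the xtrace above into the handful of commands that actually built this environment: nvmf_tcp_init moved one of the two E810 ports into a private network namespace so initiator and target genuinely exchange TCP across the link, and tgt_init started nvmf_tgt inside that namespace and assembled the subsystem over RPC. A condensed, hand-written replay follows; interface names, addresses, and paths are the ones from this run, and waitforlisten is approximated with a sleep.

#!/usr/bin/env bash
# Condensed replay of the setup steps in the log above (nvmf/common.sh and
# host/bdevperf.sh); a sketch for this CI box, not a general-purpose script.
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# The pci_devs loop above maps each E810 function to its net device via sysfs:
ls /sys/bus/pci/devices/0000:4b:00.0/net/    # -> cvl_0_0
ls /sys/bus/pci/devices/0000:4b:00.1/net/    # -> cvl_0_1

# Target interface goes into its own namespace; the initiator side stays in
# the root namespace, so 10.0.0.1 <-> 10.0.0.2 traffic crosses the wire.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2           # same reachability check as in the log

# Start the target inside the namespace. The RPC socket is a unix socket on
# the shared filesystem, so rpc.py can drive it from the root namespace.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -m 0xE &
sleep 3                      # the real script uses waitforlisten instead

# Same rpc_cmd sequence that tgt_init issued above:
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
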
00:29:02.334 
00:29:02.334 Latency(us)
00:29:02.334 Device Information : runtime(s)    IOPS     MiB/s   Fail/s   TO/s    Average      min        max
00:29:02.334 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:02.334 Verification LBA range: start 0x0 length 0x4000
00:29:02.334 Nvme1n1            :       1.01  8944.79    34.94     0.00    0.00   14249.30   3017.39   14527.15
00:29:02.334 ===================================================================================================================
00:29:02.334 Total              :             8944.79    34.94     0.00    0.00   14249.30   3017.39   14527.15
00:29:02.334 09:43:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1324520
00:29:02.334 09:43:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:29:02.334 09:43:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:29:02.334 09:43:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:29:02.334 09:43:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:29:02.334 09:43:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:29:02.334 09:43:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:29:02.334 09:43:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:29:02.334 {
00:29:02.334 "params": {
00:29:02.334 "name": "Nvme$subsystem",
00:29:02.334 "trtype": "$TEST_TRANSPORT",
00:29:02.334 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:02.334 "adrfam": "ipv4",
00:29:02.334 "trsvcid": "$NVMF_PORT",
00:29:02.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:02.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:02.334 "hdgst": ${hdgst:-false},
00:29:02.334 "ddgst": ${ddgst:-false}
00:29:02.334 },
00:29:02.334 "method": "bdev_nvme_attach_controller"
00:29:02.334 }
00:29:02.334 EOF
00:29:02.334 )")
00:29:02.334 09:43:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:29:02.334 09:43:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:29:02.334 09:43:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:29:02.334 09:43:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:29:02.334 "params": {
00:29:02.334 "name": "Nvme1",
00:29:02.334 "trtype": "tcp",
00:29:02.334 "traddr": "10.0.0.2",
00:29:02.334 "adrfam": "ipv4",
00:29:02.334 "trsvcid": "4420",
00:29:02.334 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:02.334 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:02.334 "hdgst": false,
00:29:02.334 "ddgst": false
00:29:02.334 },
00:29:02.334 "method": "bdev_nvme_attach_controller"
00:29:02.334 }'
00:29:02.334 [2024-06-11 09:43:33.957538] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:29:02.334 [2024-06-11 09:43:33.957593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1324520 ]
00:29:02.334 EAL: No free 2048 kB hugepages reported on node 1
00:29:02.334 [2024-06-11 09:43:34.033403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:02.334 [2024-06-11 09:43:34.096157] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:29:02.594 Running I/O for 15 seconds...
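
While the 15-second run gets under way, two sanity checks on the 1-second baseline table above. At a fixed queue depth, Little's law predicts IOPS of roughly depth / mean latency: 128 / 14249.30 us gives about 8983, within half a percent of the reported 8944.79 (ramp-up over the 1.01 s runtime accounts for the gap). Likewise 8944.79 IOPS at 4096 B per I/O reproduces the 34.94 MiB/s column exactly. The arithmetic, if you want to rerun it:

# Sanity-check the 1-second results table above.
awk 'BEGIN {
    qd = 128; lat_us = 14249.30; iops = 8944.79
    printf "Littles-law IOPS estimate: %.0f (reported: %.2f)\n", qd / (lat_us * 1e-6), iops
    printf "throughput: %.2f MiB/s\n", iops * 4096 / 1048576
}'
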
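A note on the --json /dev/fd/62 and /dev/fd/63 arguments: gen_nvmf_target_json never touches disk. As the xtrace shows, it accumulates one heredoc-expanded JSON fragment per subsystem in a bash array, comma-joins the fragments by setting IFS, validates the result with jq, and the caller hands it to bdevperf through process substitution. A stripped-down sketch of the same pattern (gen_json and its trimmed field set are illustrative placeholders, not the SPDK function):

#!/usr/bin/env bash
# Illustration of the config+=("$(cat <<-EOF ...)") pattern logged above.
gen_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # $subsystem expands while the heredoc is read, so one template
        # yields one bdev_nvme_attach_controller entry per subsystem.
        config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trsvcid": "4420" },
  "method": "bdev_nvme_attach_controller" }
EOF
        )")
    done
    # "${config[*]}" joins the fragments with the first character of IFS,
    # producing a valid JSON array body that jq validates and pretty-prints.
    local IFS=,
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
JSON
}

gen_json 1 2            # merged config for two subsystems
cat <(gen_json 1)       # handed over as /dev/fd/N, like --json /dev/fd/63
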
00:29:05.143 09:43:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1324154
00:29:05.143 09:43:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:29:05.143 [2024-06-11 09:43:36.926215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.143 [2024-06-11 09:43:36.926256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:05.143 [2024-06-11 09:43:36.926277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:05.143 [2024-06-11 09:43:36.926288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical print_command/print_completion pairs trimmed: every remaining queued I/O, READs lba 48488-49400 (SGL TRANSPORT DATA BLOCK) and WRITEs lba 49416-49488 (SGL DATA BLOCK OFFSET, len:0x1000), completes with ABORTED - SQ DELETION (00/08) between 09:43:36.926299 and 09:43:36.928420]
00:29:05.146 [2024-06-11 09:43:36.928429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f5340 is same with the state(5) to be set
00:29:05.146 [2024-06-11 09:43:36.928438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:05.146 [2024-06-11 09:43:36.928444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:05.146 [2024-06-11 09:43:36.928450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49408 len:8 PRP1 0x0 PRP2 0x0
00:29:05.146 [2024-06-11 09:43:36.928458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:05.146 [2024-06-11 09:43:36.928496] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24f5340 was disconnected and freed. reset controller.
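
Everything from the kill -9 onward is the actual test: host/bdevperf.sh deliberately SIGKILLs the nvmf target (the nvmfpid captured at startup, 1324154) in the middle of the 15-second run. Every I/O still outstanding on the TCP qpair therefore completes with ABORTED - SQ DELETION, the qpair is freed, and bdev_nvme schedules a controller reset. Reduced to its skeleton the injection looks like this; variable names follow the log, and the -f flag is carried over verbatim from the logged bdevperf invocation:

# Skeleton of the fault injection performed above by host/bdevperf.sh:
"$SPDK/build/examples/bdevperf" --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!
sleep 3                 # let I/O reach steady state
kill -9 "$nvmfpid"      # yank the target: triggers the abort storm above
sleep 3                 # reset/reconnect attempts accumulate meanwhile
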
00:29:05.146 [2024-06-11 09:43:36.932090] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.146 [2024-06-11 09:43:36.932136] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.146 [2024-06-11 09:43:36.932971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-06-11 09:43:36.932988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.146 [2024-06-11 09:43:36.932996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.146 [2024-06-11 09:43:36.933213] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.146 [2024-06-11 09:43:36.933436] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.146 [2024-06-11 09:43:36.933445] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.146 [2024-06-11 09:43:36.933453] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.146 [2024-06-11 09:43:36.936946] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.146 [2024-06-11 09:43:36.946205] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.146 [2024-06-11 09:43:36.946751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.146 [2024-06-11 09:43:36.946767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.146 [2024-06-11 09:43:36.946775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.146 [2024-06-11 09:43:36.946992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.147 [2024-06-11 09:43:36.947208] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.147 [2024-06-11 09:43:36.947215] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.147 [2024-06-11 09:43:36.947223] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.147 [2024-06-11 09:43:36.950713] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.409 [2024-06-11 09:43:36.959975] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.409 [2024-06-11 09:43:36.960695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-06-11 09:43:36.960735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.409 [2024-06-11 09:43:36.960746] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.409 [2024-06-11 09:43:36.960984] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.409 [2024-06-11 09:43:36.961204] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.409 [2024-06-11 09:43:36.961213] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.409 [2024-06-11 09:43:36.961220] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.409 [2024-06-11 09:43:36.964738] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.409 [2024-06-11 09:43:36.973818] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.409 [2024-06-11 09:43:36.974468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-06-11 09:43:36.974508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.409 [2024-06-11 09:43:36.974520] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.409 [2024-06-11 09:43:36.974761] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.409 [2024-06-11 09:43:36.974982] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.409 [2024-06-11 09:43:36.974992] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.409 [2024-06-11 09:43:36.975001] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.409 [2024-06-11 09:43:36.978507] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.409 [2024-06-11 09:43:36.987571] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.409 [2024-06-11 09:43:36.988270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-06-11 09:43:36.988310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.409 [2024-06-11 09:43:36.988330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.409 [2024-06-11 09:43:36.988569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.409 [2024-06-11 09:43:36.988789] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.409 [2024-06-11 09:43:36.988798] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.409 [2024-06-11 09:43:36.988806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.409 [2024-06-11 09:43:36.992307] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.409 [2024-06-11 09:43:37.001373] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.409 [2024-06-11 09:43:37.001971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-06-11 09:43:37.001990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.409 [2024-06-11 09:43:37.001998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.409 [2024-06-11 09:43:37.002221] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.410 [2024-06-11 09:43:37.002444] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.410 [2024-06-11 09:43:37.002456] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.410 [2024-06-11 09:43:37.002464] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.410 [2024-06-11 09:43:37.005956] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.410 [2024-06-11 09:43:37.015216] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.410 [2024-06-11 09:43:37.015879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.410 [2024-06-11 09:43:37.015921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.410 [2024-06-11 09:43:37.015932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.410 [2024-06-11 09:43:37.016171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.410 [2024-06-11 09:43:37.016399] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.410 [2024-06-11 09:43:37.016409] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.410 [2024-06-11 09:43:37.016416] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.410 [2024-06-11 09:43:37.019915] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.410 [2024-06-11 09:43:37.028977] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.410 [2024-06-11 09:43:37.029675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.410 [2024-06-11 09:43:37.029720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.410 [2024-06-11 09:43:37.029731] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.410 [2024-06-11 09:43:37.029973] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.410 [2024-06-11 09:43:37.030194] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.410 [2024-06-11 09:43:37.030203] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.410 [2024-06-11 09:43:37.030210] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.410 [2024-06-11 09:43:37.033717] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.410 [2024-06-11 09:43:37.042779] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.410 [2024-06-11 09:43:37.043407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.410 [2024-06-11 09:43:37.043431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.410 [2024-06-11 09:43:37.043439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.410 [2024-06-11 09:43:37.043657] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.410 [2024-06-11 09:43:37.043873] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.410 [2024-06-11 09:43:37.043881] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.410 [2024-06-11 09:43:37.043894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.410 [2024-06-11 09:43:37.047402] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.410 [2024-06-11 09:43:37.056682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.410 [2024-06-11 09:43:37.057383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.410 [2024-06-11 09:43:37.057434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.410 [2024-06-11 09:43:37.057447] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.410 [2024-06-11 09:43:37.057694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.410 [2024-06-11 09:43:37.057915] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.410 [2024-06-11 09:43:37.057925] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.410 [2024-06-11 09:43:37.057933] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.410 [2024-06-11 09:43:37.061455] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.410 [2024-06-11 09:43:37.070538] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.410 [2024-06-11 09:43:37.071261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.410 [2024-06-11 09:43:37.071324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.410 [2024-06-11 09:43:37.071339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.410 [2024-06-11 09:43:37.071586] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.410 [2024-06-11 09:43:37.071808] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.410 [2024-06-11 09:43:37.071816] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.410 [2024-06-11 09:43:37.071824] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.410 [2024-06-11 09:43:37.075332] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.410 [2024-06-11 09:43:37.084415] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.410 [2024-06-11 09:43:37.085185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.410 [2024-06-11 09:43:37.085243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.410 [2024-06-11 09:43:37.085255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.410 [2024-06-11 09:43:37.085518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.410 [2024-06-11 09:43:37.085742] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.410 [2024-06-11 09:43:37.085751] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.410 [2024-06-11 09:43:37.085758] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.410 [2024-06-11 09:43:37.089264] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.410 [2024-06-11 09:43:37.098342] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.410 [2024-06-11 09:43:37.099108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.410 [2024-06-11 09:43:37.099177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.410 [2024-06-11 09:43:37.099190] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.410 [2024-06-11 09:43:37.099457] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.410 [2024-06-11 09:43:37.099683] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.410 [2024-06-11 09:43:37.099692] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.410 [2024-06-11 09:43:37.099700] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.410 [2024-06-11 09:43:37.103212] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.410 [2024-06-11 09:43:37.112087] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.410 [2024-06-11 09:43:37.112799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.410 [2024-06-11 09:43:37.112860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.410 [2024-06-11 09:43:37.112872] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.410 [2024-06-11 09:43:37.113124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.410 [2024-06-11 09:43:37.113362] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.410 [2024-06-11 09:43:37.113372] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.410 [2024-06-11 09:43:37.113380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.410 [2024-06-11 09:43:37.116895] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.410 [2024-06-11 09:43:37.125965] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.410 [2024-06-11 09:43:37.126694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.410 [2024-06-11 09:43:37.126755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.410 [2024-06-11 09:43:37.126767] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.410 [2024-06-11 09:43:37.127019] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.410 [2024-06-11 09:43:37.127242] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.410 [2024-06-11 09:43:37.127251] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.410 [2024-06-11 09:43:37.127260] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.410 [2024-06-11 09:43:37.130788] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.410 [2024-06-11 09:43:37.139865] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.410 [2024-06-11 09:43:37.140638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.410 [2024-06-11 09:43:37.140700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.410 [2024-06-11 09:43:37.140712] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.410 [2024-06-11 09:43:37.140965] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.410 [2024-06-11 09:43:37.141196] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.410 [2024-06-11 09:43:37.141207] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.411 [2024-06-11 09:43:37.141214] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.411 [2024-06-11 09:43:37.144748] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.411 [2024-06-11 09:43:37.153616] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.411 [2024-06-11 09:43:37.154345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.411 [2024-06-11 09:43:37.154407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.411 [2024-06-11 09:43:37.154420] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.411 [2024-06-11 09:43:37.154672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.411 [2024-06-11 09:43:37.154896] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.411 [2024-06-11 09:43:37.154905] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.411 [2024-06-11 09:43:37.154912] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.411 [2024-06-11 09:43:37.158438] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.411 [2024-06-11 09:43:37.167604] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.411 [2024-06-11 09:43:37.168262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.411 [2024-06-11 09:43:37.168288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.411 [2024-06-11 09:43:37.168297] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.411 [2024-06-11 09:43:37.168528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.411 [2024-06-11 09:43:37.168747] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.411 [2024-06-11 09:43:37.168756] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.411 [2024-06-11 09:43:37.168763] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.411 [2024-06-11 09:43:37.172268] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.411 [2024-06-11 09:43:37.181545] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.411 [2024-06-11 09:43:37.182197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.411 [2024-06-11 09:43:37.182219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.411 [2024-06-11 09:43:37.182227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.411 [2024-06-11 09:43:37.182454] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.411 [2024-06-11 09:43:37.182672] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.411 [2024-06-11 09:43:37.182681] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.411 [2024-06-11 09:43:37.182688] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.411 [2024-06-11 09:43:37.186208] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.411 [2024-06-11 09:43:37.195301] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.411 [2024-06-11 09:43:37.195987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.411 [2024-06-11 09:43:37.196009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.411 [2024-06-11 09:43:37.196017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.411 [2024-06-11 09:43:37.196235] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.411 [2024-06-11 09:43:37.196461] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.411 [2024-06-11 09:43:37.196471] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.411 [2024-06-11 09:43:37.196479] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.411 [2024-06-11 09:43:37.200004] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.411 [2024-06-11 09:43:37.209085] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.411 [2024-06-11 09:43:37.209617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.411 [2024-06-11 09:43:37.209639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.411 [2024-06-11 09:43:37.209647] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.411 [2024-06-11 09:43:37.209865] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.411 [2024-06-11 09:43:37.210082] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.411 [2024-06-11 09:43:37.210092] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.411 [2024-06-11 09:43:37.210100] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.411 [2024-06-11 09:43:37.213608] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.411 [2024-06-11 09:43:37.222897] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.674 [2024-06-11 09:43:37.223675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.674 [2024-06-11 09:43:37.223738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.674 [2024-06-11 09:43:37.223751] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.674 [2024-06-11 09:43:37.224005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.674 [2024-06-11 09:43:37.224229] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.674 [2024-06-11 09:43:37.224238] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.674 [2024-06-11 09:43:37.224246] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.674 [2024-06-11 09:43:37.227777] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.674 [2024-06-11 09:43:37.236663] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.674 [2024-06-11 09:43:37.237433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.674 [2024-06-11 09:43:37.237494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.674 [2024-06-11 09:43:37.237515] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.674 [2024-06-11 09:43:37.237768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.674 [2024-06-11 09:43:37.237991] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.674 [2024-06-11 09:43:37.238000] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.674 [2024-06-11 09:43:37.238008] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.674 [2024-06-11 09:43:37.241533] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.674 [2024-06-11 09:43:37.250606] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.674 [2024-06-11 09:43:37.251372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.674 [2024-06-11 09:43:37.251434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.674 [2024-06-11 09:43:37.251447] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.674 [2024-06-11 09:43:37.251700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.674 [2024-06-11 09:43:37.251924] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.674 [2024-06-11 09:43:37.251933] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.674 [2024-06-11 09:43:37.251941] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.674 [2024-06-11 09:43:37.255475] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.674 [2024-06-11 09:43:37.264568] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.674 [2024-06-11 09:43:37.265298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.674 [2024-06-11 09:43:37.265369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.674 [2024-06-11 09:43:37.265382] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.674 [2024-06-11 09:43:37.265635] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.674 [2024-06-11 09:43:37.265858] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.674 [2024-06-11 09:43:37.265867] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.674 [2024-06-11 09:43:37.265875] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.674 [2024-06-11 09:43:37.269390] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.674 [2024-06-11 09:43:37.278463] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.674 [2024-06-11 09:43:37.279183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.674 [2024-06-11 09:43:37.279245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.674 [2024-06-11 09:43:37.279258] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.674 [2024-06-11 09:43:37.279525] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.674 [2024-06-11 09:43:37.279751] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.674 [2024-06-11 09:43:37.279766] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.674 [2024-06-11 09:43:37.279775] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.674 [2024-06-11 09:43:37.283283] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.674 [2024-06-11 09:43:37.292363] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.674 [2024-06-11 09:43:37.293152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.674 [2024-06-11 09:43:37.293213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.674 [2024-06-11 09:43:37.293226] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.674 [2024-06-11 09:43:37.293495] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.674 [2024-06-11 09:43:37.293720] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.674 [2024-06-11 09:43:37.293730] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.674 [2024-06-11 09:43:37.293737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.674 [2024-06-11 09:43:37.297373] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.674 [2024-06-11 09:43:37.306277] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.674 [2024-06-11 09:43:37.307057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.674 [2024-06-11 09:43:37.307119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.674 [2024-06-11 09:43:37.307133] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.674 [2024-06-11 09:43:37.307402] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.674 [2024-06-11 09:43:37.307627] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.674 [2024-06-11 09:43:37.307636] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.674 [2024-06-11 09:43:37.307644] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.674 [2024-06-11 09:43:37.311152] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.674 [2024-06-11 09:43:37.320019] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.674 [2024-06-11 09:43:37.320759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.674 [2024-06-11 09:43:37.320821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.674 [2024-06-11 09:43:37.320833] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.674 [2024-06-11 09:43:37.321086] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.674 [2024-06-11 09:43:37.321310] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.674 [2024-06-11 09:43:37.321331] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.674 [2024-06-11 09:43:37.321339] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.675 [2024-06-11 09:43:37.324855] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.675 [2024-06-11 09:43:37.333943] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.675 [2024-06-11 09:43:37.334669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.675 [2024-06-11 09:43:37.334730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.675 [2024-06-11 09:43:37.334743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.675 [2024-06-11 09:43:37.334996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.675 [2024-06-11 09:43:37.335219] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.675 [2024-06-11 09:43:37.335228] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.675 [2024-06-11 09:43:37.335236] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.675 [2024-06-11 09:43:37.338768] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.675 [2024-06-11 09:43:37.347842] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.675 [2024-06-11 09:43:37.348400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.675 [2024-06-11 09:43:37.348429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.675 [2024-06-11 09:43:37.348438] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.675 [2024-06-11 09:43:37.348659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.675 [2024-06-11 09:43:37.348877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.675 [2024-06-11 09:43:37.348886] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.675 [2024-06-11 09:43:37.348894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.675 [2024-06-11 09:43:37.352400] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.675 [2024-06-11 09:43:37.361680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.675 [2024-06-11 09:43:37.362392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.675 [2024-06-11 09:43:37.362453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.675 [2024-06-11 09:43:37.362466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.675 [2024-06-11 09:43:37.362719] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.675 [2024-06-11 09:43:37.362943] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.675 [2024-06-11 09:43:37.362952] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.675 [2024-06-11 09:43:37.362960] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.675 [2024-06-11 09:43:37.366488] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.675 [2024-06-11 09:43:37.375559] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.675 [2024-06-11 09:43:37.376276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.675 [2024-06-11 09:43:37.376348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.675 [2024-06-11 09:43:37.376361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.675 [2024-06-11 09:43:37.376620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.675 [2024-06-11 09:43:37.376844] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.675 [2024-06-11 09:43:37.376853] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.675 [2024-06-11 09:43:37.376861] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.675 [2024-06-11 09:43:37.380379] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.675 [2024-06-11 09:43:37.389444] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.675 [2024-06-11 09:43:37.390226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.675 [2024-06-11 09:43:37.390288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.675 [2024-06-11 09:43:37.390300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.675 [2024-06-11 09:43:37.390566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.675 [2024-06-11 09:43:37.390790] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.675 [2024-06-11 09:43:37.390799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.675 [2024-06-11 09:43:37.390807] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.675 [2024-06-11 09:43:37.394321] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.675 [2024-06-11 09:43:37.403184] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.675 [2024-06-11 09:43:37.403953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.675 [2024-06-11 09:43:37.404015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.675 [2024-06-11 09:43:37.404028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.675 [2024-06-11 09:43:37.404282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.675 [2024-06-11 09:43:37.404518] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.675 [2024-06-11 09:43:37.404528] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.675 [2024-06-11 09:43:37.404536] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.675 [2024-06-11 09:43:37.408062] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.675 [2024-06-11 09:43:37.416927] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.675 [2024-06-11 09:43:37.417716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.675 [2024-06-11 09:43:37.417774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.675 [2024-06-11 09:43:37.417786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.675 [2024-06-11 09:43:37.418035] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.675 [2024-06-11 09:43:37.418259] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.675 [2024-06-11 09:43:37.418268] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.675 [2024-06-11 09:43:37.418288] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.675 [2024-06-11 09:43:37.421811] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.675 [2024-06-11 09:43:37.430673] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.675 [2024-06-11 09:43:37.431354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.675 [2024-06-11 09:43:37.431416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.675 [2024-06-11 09:43:37.431429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.675 [2024-06-11 09:43:37.431681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.675 [2024-06-11 09:43:37.431905] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.675 [2024-06-11 09:43:37.431914] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.675 [2024-06-11 09:43:37.431924] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.675 [2024-06-11 09:43:37.435469] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.675 [2024-06-11 09:43:37.444567] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.675 [2024-06-11 09:43:37.445366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.675 [2024-06-11 09:43:37.445428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.675 [2024-06-11 09:43:37.445443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.675 [2024-06-11 09:43:37.445697] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.675 [2024-06-11 09:43:37.445921] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.675 [2024-06-11 09:43:37.445929] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.675 [2024-06-11 09:43:37.445938] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.675 [2024-06-11 09:43:37.449460] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.675 [2024-06-11 09:43:37.458329] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.675 [2024-06-11 09:43:37.459075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.675 [2024-06-11 09:43:37.459135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.675 [2024-06-11 09:43:37.459148] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.675 [2024-06-11 09:43:37.459415] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.675 [2024-06-11 09:43:37.459640] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.675 [2024-06-11 09:43:37.459649] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.675 [2024-06-11 09:43:37.459657] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.675 [2024-06-11 09:43:37.463194] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.676 [2024-06-11 09:43:37.472069] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.676 [2024-06-11 09:43:37.472571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.676 [2024-06-11 09:43:37.472604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.676 [2024-06-11 09:43:37.472613] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.676 [2024-06-11 09:43:37.472841] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.676 [2024-06-11 09:43:37.473061] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.676 [2024-06-11 09:43:37.473071] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.676 [2024-06-11 09:43:37.473078] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.676 [2024-06-11 09:43:37.476598] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.676 [2024-06-11 09:43:37.485893] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.676 [2024-06-11 09:43:37.486613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.676 [2024-06-11 09:43:37.486675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.676 [2024-06-11 09:43:37.486687] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.676 [2024-06-11 09:43:37.486940] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.676 [2024-06-11 09:43:37.487164] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.676 [2024-06-11 09:43:37.487173] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.676 [2024-06-11 09:43:37.487182] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.939 [2024-06-11 09:43:37.490722] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.939 [2024-06-11 09:43:37.499813] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.939 [2024-06-11 09:43:37.500593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.939 [2024-06-11 09:43:37.500655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:05.939 [2024-06-11 09:43:37.500668] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:05.939 [2024-06-11 09:43:37.500920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:05.939 [2024-06-11 09:43:37.501145] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.939 [2024-06-11 09:43:37.501154] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.939 [2024-06-11 09:43:37.501162] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.939 [2024-06-11 09:43:37.504693] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.939 [2024-06-11 09:43:37.513563] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.939 [2024-06-11 09:43:37.514239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.939 [2024-06-11 09:43:37.514266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:05.939 [2024-06-11 09:43:37.514275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:05.939 [2024-06-11 09:43:37.514504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:05.939 [2024-06-11 09:43:37.514730] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.939 [2024-06-11 09:43:37.514739] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.939 [2024-06-11 09:43:37.514746] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.939 [2024-06-11 09:43:37.518245] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.939 [2024-06-11 09:43:37.527303] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.939 [2024-06-11 09:43:37.528001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.939 [2024-06-11 09:43:37.528064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:05.939 [2024-06-11 09:43:37.528077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:05.939 [2024-06-11 09:43:37.528342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:05.939 [2024-06-11 09:43:37.528566] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.939 [2024-06-11 09:43:37.528575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.939 [2024-06-11 09:43:37.528583] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.939 [2024-06-11 09:43:37.532094] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.939 [2024-06-11 09:43:37.541175] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.939 [2024-06-11 09:43:37.541818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.939 [2024-06-11 09:43:37.541845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:05.939 [2024-06-11 09:43:37.541854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:05.939 [2024-06-11 09:43:37.542074] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:05.939 [2024-06-11 09:43:37.542291] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.939 [2024-06-11 09:43:37.542301] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.939 [2024-06-11 09:43:37.542309] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.939 [2024-06-11 09:43:37.545823] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.939 [2024-06-11 09:43:37.555118] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.939 [2024-06-11 09:43:37.555754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.939 [2024-06-11 09:43:37.555777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:05.939 [2024-06-11 09:43:37.555785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:05.939 [2024-06-11 09:43:37.556003] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:05.939 [2024-06-11 09:43:37.556221] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.939 [2024-06-11 09:43:37.556229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.939 [2024-06-11 09:43:37.556236] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.939 [2024-06-11 09:43:37.559755] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.939 [2024-06-11 09:43:37.569043] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.939 [2024-06-11 09:43:37.569784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.939 [2024-06-11 09:43:37.569845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:05.939 [2024-06-11 09:43:37.569858] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:05.939 [2024-06-11 09:43:37.570110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:05.939 [2024-06-11 09:43:37.570349] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.939 [2024-06-11 09:43:37.570359] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.939 [2024-06-11 09:43:37.570367] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.940 [2024-06-11 09:43:37.573885] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.940 [2024-06-11 09:43:37.582992] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.940 [2024-06-11 09:43:37.583729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.940 [2024-06-11 09:43:37.583790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:05.940 [2024-06-11 09:43:37.583803] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:05.940 [2024-06-11 09:43:37.584056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:05.940 [2024-06-11 09:43:37.584279] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.940 [2024-06-11 09:43:37.584288] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.940 [2024-06-11 09:43:37.584296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.940 [2024-06-11 09:43:37.587820] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.940 [2024-06-11 09:43:37.596903] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.940 [2024-06-11 09:43:37.597575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.940 [2024-06-11 09:43:37.597636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:05.940 [2024-06-11 09:43:37.597649] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:05.940 [2024-06-11 09:43:37.597902] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:05.940 [2024-06-11 09:43:37.598126] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.940 [2024-06-11 09:43:37.598134] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.940 [2024-06-11 09:43:37.598142] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.940 [2024-06-11 09:43:37.601666] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.940 [2024-06-11 09:43:37.610749] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.940 [2024-06-11 09:43:37.611591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.940 [2024-06-11 09:43:37.611660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:05.940 [2024-06-11 09:43:37.611673] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:05.940 [2024-06-11 09:43:37.611927] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:05.940 [2024-06-11 09:43:37.612151] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.940 [2024-06-11 09:43:37.612162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.940 [2024-06-11 09:43:37.612170] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.940 [2024-06-11 09:43:37.615710] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.940 [2024-06-11 09:43:37.624593] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.940 [2024-06-11 09:43:37.625232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.940 [2024-06-11 09:43:37.625258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:05.940 [2024-06-11 09:43:37.625267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:05.940 [2024-06-11 09:43:37.625497] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:05.940 [2024-06-11 09:43:37.625716] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.940 [2024-06-11 09:43:37.625726] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.940 [2024-06-11 09:43:37.625734] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.940 [2024-06-11 09:43:37.629251] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.940 [2024-06-11 09:43:37.638359] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.940 [2024-06-11 09:43:37.639067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.940 [2024-06-11 09:43:37.639129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:05.940 [2024-06-11 09:43:37.639142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:05.940 [2024-06-11 09:43:37.639417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:05.940 [2024-06-11 09:43:37.639642] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.940 [2024-06-11 09:43:37.639651] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.940 [2024-06-11 09:43:37.639659] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.940 [2024-06-11 09:43:37.643173] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.940 [2024-06-11 09:43:37.652260] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.940 [2024-06-11 09:43:37.653057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.940 [2024-06-11 09:43:37.653119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:05.940 [2024-06-11 09:43:37.653131] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:05.940 [2024-06-11 09:43:37.653395] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:05.940 [2024-06-11 09:43:37.653627] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.940 [2024-06-11 09:43:37.653636] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.940 [2024-06-11 09:43:37.653644] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.940 [2024-06-11 09:43:37.657159] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.940 [2024-06-11 09:43:37.666065] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.940 [2024-06-11 09:43:37.666694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.940 [2024-06-11 09:43:37.666756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:05.940 [2024-06-11 09:43:37.666769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:05.940 [2024-06-11 09:43:37.667021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:05.940 [2024-06-11 09:43:37.667245] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.940 [2024-06-11 09:43:37.667253] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.940 [2024-06-11 09:43:37.667262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.940 [2024-06-11 09:43:37.670796] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.940 [2024-06-11 09:43:37.679875] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.940 [2024-06-11 09:43:37.680603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.940 [2024-06-11 09:43:37.680664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:05.940 [2024-06-11 09:43:37.680677] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:05.940 [2024-06-11 09:43:37.680929] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:05.940 [2024-06-11 09:43:37.681152] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.940 [2024-06-11 09:43:37.681161] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.940 [2024-06-11 09:43:37.681169] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.940 [2024-06-11 09:43:37.684694] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.940 [2024-06-11 09:43:37.693781] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.940 [2024-06-11 09:43:37.694427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.940 [2024-06-11 09:43:37.694489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:05.940 [2024-06-11 09:43:37.694501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:05.940 [2024-06-11 09:43:37.694754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:05.940 [2024-06-11 09:43:37.694977] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.940 [2024-06-11 09:43:37.694987] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.940 [2024-06-11 09:43:37.694995] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.940 [2024-06-11 09:43:37.698522] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.940 [2024-06-11 09:43:37.707612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.940 [2024-06-11 09:43:37.708275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.940 [2024-06-11 09:43:37.708301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:05.940 [2024-06-11 09:43:37.708310] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:05.940 [2024-06-11 09:43:37.708540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:05.940 [2024-06-11 09:43:37.708758] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.940 [2024-06-11 09:43:37.708769] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.940 [2024-06-11 09:43:37.708776] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.940 [2024-06-11 09:43:37.712296] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.940 [2024-06-11 09:43:37.721377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.940 [2024-06-11 09:43:37.722134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.941 [2024-06-11 09:43:37.722195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:05.941 [2024-06-11 09:43:37.722208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:05.941 [2024-06-11 09:43:37.722472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:05.941 [2024-06-11 09:43:37.722697] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.941 [2024-06-11 09:43:37.722706] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.941 [2024-06-11 09:43:37.722714] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.941 [2024-06-11 09:43:37.726227] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.941 [2024-06-11 09:43:37.735304] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.941 [2024-06-11 09:43:37.736005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.941 [2024-06-11 09:43:37.736066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:05.941 [2024-06-11 09:43:37.736079] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:05.941 [2024-06-11 09:43:37.736593] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:05.941 [2024-06-11 09:43:37.736820] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.941 [2024-06-11 09:43:37.736829] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.941 [2024-06-11 09:43:37.736838] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.941 [2024-06-11 09:43:37.740359] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.941 [2024-06-11 09:43:37.749065] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.941 [2024-06-11 09:43:37.749799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.941 [2024-06-11 09:43:37.749861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:05.941 [2024-06-11 09:43:37.749880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:05.941 [2024-06-11 09:43:37.750133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:05.941 [2024-06-11 09:43:37.750371] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.941 [2024-06-11 09:43:37.750381] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.941 [2024-06-11 09:43:37.750389] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.203 [2024-06-11 09:43:37.753908] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.203 [2024-06-11 09:43:37.763011] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.203 [2024-06-11 09:43:37.763750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.203 [2024-06-11 09:43:37.763812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.203 [2024-06-11 09:43:37.763825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.203 [2024-06-11 09:43:37.764077] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.203 [2024-06-11 09:43:37.764301] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.203 [2024-06-11 09:43:37.764309] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.203 [2024-06-11 09:43:37.764332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.203 [2024-06-11 09:43:37.767851] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.203 [2024-06-11 09:43:37.776931] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.203 [2024-06-11 09:43:37.777671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.203 [2024-06-11 09:43:37.777733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.203 [2024-06-11 09:43:37.777746] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.203 [2024-06-11 09:43:37.777998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.203 [2024-06-11 09:43:37.778221] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.203 [2024-06-11 09:43:37.778230] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.203 [2024-06-11 09:43:37.778239] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.203 [2024-06-11 09:43:37.781772] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.203 [2024-06-11 09:43:37.790869] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.203 [2024-06-11 09:43:37.791628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.203 [2024-06-11 09:43:37.791689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.203 [2024-06-11 09:43:37.791702] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.203 [2024-06-11 09:43:37.791956] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.203 [2024-06-11 09:43:37.792180] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.203 [2024-06-11 09:43:37.792197] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.203 [2024-06-11 09:43:37.792206] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.203 [2024-06-11 09:43:37.795737] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.204 [2024-06-11 09:43:37.804627] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.204 [2024-06-11 09:43:37.805258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.204 [2024-06-11 09:43:37.805285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.204 [2024-06-11 09:43:37.805294] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.204 [2024-06-11 09:43:37.805526] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.204 [2024-06-11 09:43:37.805744] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.204 [2024-06-11 09:43:37.805753] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.204 [2024-06-11 09:43:37.805760] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.204 [2024-06-11 09:43:37.809273] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.204 [2024-06-11 09:43:37.818577] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.204 [2024-06-11 09:43:37.819367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.204 [2024-06-11 09:43:37.819429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.204 [2024-06-11 09:43:37.819444] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.204 [2024-06-11 09:43:37.819698] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.204 [2024-06-11 09:43:37.819922] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.204 [2024-06-11 09:43:37.819932] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.204 [2024-06-11 09:43:37.819940] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.204 [2024-06-11 09:43:37.823475] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.204 [2024-06-11 09:43:37.832354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.204 [2024-06-11 09:43:37.833132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.204 [2024-06-11 09:43:37.833193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.204 [2024-06-11 09:43:37.833206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.204 [2024-06-11 09:43:37.833472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.204 [2024-06-11 09:43:37.833698] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.204 [2024-06-11 09:43:37.833707] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.204 [2024-06-11 09:43:37.833715] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.204 [2024-06-11 09:43:37.837233] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.204 [2024-06-11 09:43:37.846104] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.204 [2024-06-11 09:43:37.846862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.204 [2024-06-11 09:43:37.846923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.204 [2024-06-11 09:43:37.846936] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.204 [2024-06-11 09:43:37.847189] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.204 [2024-06-11 09:43:37.847428] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.204 [2024-06-11 09:43:37.847438] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.204 [2024-06-11 09:43:37.847446] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.204 [2024-06-11 09:43:37.850962] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.204 [2024-06-11 09:43:37.860043] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.204 [2024-06-11 09:43:37.860777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.204 [2024-06-11 09:43:37.860839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.204 [2024-06-11 09:43:37.860851] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.204 [2024-06-11 09:43:37.861104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.204 [2024-06-11 09:43:37.861356] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.204 [2024-06-11 09:43:37.861367] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.204 [2024-06-11 09:43:37.861375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.204 [2024-06-11 09:43:37.864892] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.204 [2024-06-11 09:43:37.873979] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.204 [2024-06-11 09:43:37.874747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.204 [2024-06-11 09:43:37.874809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.204 [2024-06-11 09:43:37.874822] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.204 [2024-06-11 09:43:37.875074] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.204 [2024-06-11 09:43:37.875299] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.204 [2024-06-11 09:43:37.875308] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.204 [2024-06-11 09:43:37.875328] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.204 [2024-06-11 09:43:37.878850] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.204 [2024-06-11 09:43:37.887727] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.204 [2024-06-11 09:43:37.888454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.204 [2024-06-11 09:43:37.888517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.204 [2024-06-11 09:43:37.888530] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.204 [2024-06-11 09:43:37.888790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.204 [2024-06-11 09:43:37.889013] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.204 [2024-06-11 09:43:37.889023] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.204 [2024-06-11 09:43:37.889031] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.204 [2024-06-11 09:43:37.892565] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.204 [2024-06-11 09:43:37.901651] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.204 [2024-06-11 09:43:37.902331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.204 [2024-06-11 09:43:37.902357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.204 [2024-06-11 09:43:37.902367] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.204 [2024-06-11 09:43:37.902587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.204 [2024-06-11 09:43:37.902805] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.204 [2024-06-11 09:43:37.902815] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.204 [2024-06-11 09:43:37.902823] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.204 [2024-06-11 09:43:37.906326] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.204 [2024-06-11 09:43:37.915397] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.204 [2024-06-11 09:43:37.916019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.204 [2024-06-11 09:43:37.916038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.204 [2024-06-11 09:43:37.916046] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.204 [2024-06-11 09:43:37.916263] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.204 [2024-06-11 09:43:37.916489] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.204 [2024-06-11 09:43:37.916498] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.204 [2024-06-11 09:43:37.916505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.204 [2024-06-11 09:43:37.920008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.204 [2024-06-11 09:43:37.929290] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.204 [2024-06-11 09:43:37.929911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.204 [2024-06-11 09:43:37.929961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.204 [2024-06-11 09:43:37.929973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.204 [2024-06-11 09:43:37.930217] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.204 [2024-06-11 09:43:37.930449] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.204 [2024-06-11 09:43:37.930459] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.204 [2024-06-11 09:43:37.930473] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.204 [2024-06-11 09:43:37.933983] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.204 [2024-06-11 09:43:37.943067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.204 [2024-06-11 09:43:37.943731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.205 [2024-06-11 09:43:37.943753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.205 [2024-06-11 09:43:37.943762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.205 [2024-06-11 09:43:37.943979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.205 [2024-06-11 09:43:37.944196] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.205 [2024-06-11 09:43:37.944205] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.205 [2024-06-11 09:43:37.944213] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.205 [2024-06-11 09:43:37.947718] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.205 [2024-06-11 09:43:37.956994] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.205 [2024-06-11 09:43:37.957696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.205 [2024-06-11 09:43:37.957739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.205 [2024-06-11 09:43:37.957750] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.205 [2024-06-11 09:43:37.957990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.205 [2024-06-11 09:43:37.958211] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.205 [2024-06-11 09:43:37.958221] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.205 [2024-06-11 09:43:37.958228] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.205 [2024-06-11 09:43:37.961749] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.205 [2024-06-11 09:43:37.970926] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.205 [2024-06-11 09:43:37.971552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.205 [2024-06-11 09:43:37.971575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.205 [2024-06-11 09:43:37.971583] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.205 [2024-06-11 09:43:37.971800] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.205 [2024-06-11 09:43:37.972017] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.205 [2024-06-11 09:43:37.972025] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.205 [2024-06-11 09:43:37.972032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.205 [2024-06-11 09:43:37.975529] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.205 [2024-06-11 09:43:37.984795] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.205 [2024-06-11 09:43:37.985418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.205 [2024-06-11 09:43:37.985460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.205 [2024-06-11 09:43:37.985472] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.205 [2024-06-11 09:43:37.985714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.205 [2024-06-11 09:43:37.985935] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.205 [2024-06-11 09:43:37.985943] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.205 [2024-06-11 09:43:37.985951] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.205 [2024-06-11 09:43:37.989460] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.205 [2024-06-11 09:43:37.998734] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.205 [2024-06-11 09:43:37.999367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.205 [2024-06-11 09:43:37.999386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.205 [2024-06-11 09:43:37.999395] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.205 [2024-06-11 09:43:37.999612] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.205 [2024-06-11 09:43:37.999828] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.205 [2024-06-11 09:43:37.999836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.205 [2024-06-11 09:43:37.999843] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.205 [2024-06-11 09:43:38.003341] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.205 [2024-06-11 09:43:38.012625] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.205 [2024-06-11 09:43:38.013173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.205 [2024-06-11 09:43:38.013212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.205 [2024-06-11 09:43:38.013223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.205 [2024-06-11 09:43:38.013467] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.205 [2024-06-11 09:43:38.013688] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.205 [2024-06-11 09:43:38.013697] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.205 [2024-06-11 09:43:38.013705] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.205 [2024-06-11 09:43:38.017212] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.468 [2024-06-11 09:43:38.026488] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.468 [2024-06-11 09:43:38.027157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.468 [2024-06-11 09:43:38.027196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.468 [2024-06-11 09:43:38.027207] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.468 [2024-06-11 09:43:38.027451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.468 [2024-06-11 09:43:38.027677] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.468 [2024-06-11 09:43:38.027686] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.468 [2024-06-11 09:43:38.027694] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.468 [2024-06-11 09:43:38.031195] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.468 [2024-06-11 09:43:38.040261] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.468 [2024-06-11 09:43:38.040956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.468 [2024-06-11 09:43:38.040993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.468 [2024-06-11 09:43:38.041005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.468 [2024-06-11 09:43:38.041241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.468 [2024-06-11 09:43:38.041468] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.468 [2024-06-11 09:43:38.041477] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.468 [2024-06-11 09:43:38.041485] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.468 [2024-06-11 09:43:38.044980] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.468 [2024-06-11 09:43:38.054044] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.468 [2024-06-11 09:43:38.054679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.468 [2024-06-11 09:43:38.054698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.468 [2024-06-11 09:43:38.054706] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.468 [2024-06-11 09:43:38.054923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.468 [2024-06-11 09:43:38.055138] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.468 [2024-06-11 09:43:38.055147] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.468 [2024-06-11 09:43:38.055155] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.468 [2024-06-11 09:43:38.058650] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.468 [2024-06-11 09:43:38.067925] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.468 [2024-06-11 09:43:38.069001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.468 [2024-06-11 09:43:38.069025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.468 [2024-06-11 09:43:38.069033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.468 [2024-06-11 09:43:38.069257] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.468 [2024-06-11 09:43:38.069482] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.468 [2024-06-11 09:43:38.069490] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.468 [2024-06-11 09:43:38.069497] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.468 [2024-06-11 09:43:38.072995] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.468 [2024-06-11 09:43:38.081846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.468 [2024-06-11 09:43:38.082450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.468 [2024-06-11 09:43:38.082467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.468 [2024-06-11 09:43:38.082476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.468 [2024-06-11 09:43:38.082692] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.468 [2024-06-11 09:43:38.082907] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.468 [2024-06-11 09:43:38.082915] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.468 [2024-06-11 09:43:38.082922] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.468 [2024-06-11 09:43:38.086414] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.468 [2024-06-11 09:43:38.095704] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.468 [2024-06-11 09:43:38.096413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.468 [2024-06-11 09:43:38.096451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.468 [2024-06-11 09:43:38.096463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.468 [2024-06-11 09:43:38.096699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.468 [2024-06-11 09:43:38.096919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.468 [2024-06-11 09:43:38.096927] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.468 [2024-06-11 09:43:38.096935] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.468 [2024-06-11 09:43:38.100440] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.468 [2024-06-11 09:43:38.109506] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.468 [2024-06-11 09:43:38.110140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.468 [2024-06-11 09:43:38.110158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.468 [2024-06-11 09:43:38.110166] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.468 [2024-06-11 09:43:38.110386] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.468 [2024-06-11 09:43:38.110603] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.468 [2024-06-11 09:43:38.110611] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.468 [2024-06-11 09:43:38.110618] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.468 [2024-06-11 09:43:38.114109] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.468 [2024-06-11 09:43:38.123373] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.468 [2024-06-11 09:43:38.123943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.468 [2024-06-11 09:43:38.123957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.468 [2024-06-11 09:43:38.123970] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.468 [2024-06-11 09:43:38.124185] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.468 [2024-06-11 09:43:38.124406] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.468 [2024-06-11 09:43:38.124415] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.468 [2024-06-11 09:43:38.124422] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.468 [2024-06-11 09:43:38.127910] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.468 [2024-06-11 09:43:38.137175] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.468 [2024-06-11 09:43:38.137789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.468 [2024-06-11 09:43:38.137827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.468 [2024-06-11 09:43:38.137838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.468 [2024-06-11 09:43:38.138074] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.469 [2024-06-11 09:43:38.138294] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.469 [2024-06-11 09:43:38.138302] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.469 [2024-06-11 09:43:38.138309] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.469 [2024-06-11 09:43:38.141815] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.469 [2024-06-11 09:43:38.151089] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.469 [2024-06-11 09:43:38.151788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.469 [2024-06-11 09:43:38.151826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.469 [2024-06-11 09:43:38.151836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.469 [2024-06-11 09:43:38.152072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.469 [2024-06-11 09:43:38.152292] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.469 [2024-06-11 09:43:38.152300] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.469 [2024-06-11 09:43:38.152307] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.469 [2024-06-11 09:43:38.155811] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.469 [2024-06-11 09:43:38.164878] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.469 [2024-06-11 09:43:38.165477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.469 [2024-06-11 09:43:38.165497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.469 [2024-06-11 09:43:38.165505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.469 [2024-06-11 09:43:38.165722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.469 [2024-06-11 09:43:38.165942] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.469 [2024-06-11 09:43:38.165950] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.469 [2024-06-11 09:43:38.165956] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.469 [2024-06-11 09:43:38.169452] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.469 [2024-06-11 09:43:38.178712] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.469 [2024-06-11 09:43:38.179296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.469 [2024-06-11 09:43:38.179310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.469 [2024-06-11 09:43:38.179323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.469 [2024-06-11 09:43:38.179539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.469 [2024-06-11 09:43:38.179754] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.469 [2024-06-11 09:43:38.179762] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.469 [2024-06-11 09:43:38.179769] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.469 [2024-06-11 09:43:38.183254] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.469 [2024-06-11 09:43:38.192516] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.469 [2024-06-11 09:43:38.193090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.469 [2024-06-11 09:43:38.193105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.469 [2024-06-11 09:43:38.193112] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.469 [2024-06-11 09:43:38.193333] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.469 [2024-06-11 09:43:38.193549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.469 [2024-06-11 09:43:38.193557] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.469 [2024-06-11 09:43:38.193564] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.469 [2024-06-11 09:43:38.197049] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.469 [2024-06-11 09:43:38.206312] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.469 [2024-06-11 09:43:38.206883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.469 [2024-06-11 09:43:38.206899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.469 [2024-06-11 09:43:38.206906] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.469 [2024-06-11 09:43:38.207121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.469 [2024-06-11 09:43:38.207341] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.469 [2024-06-11 09:43:38.207351] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.469 [2024-06-11 09:43:38.207357] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.469 [2024-06-11 09:43:38.210845] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.469 [2024-06-11 09:43:38.220111] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.469 [2024-06-11 09:43:38.220743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.469 [2024-06-11 09:43:38.220759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.469 [2024-06-11 09:43:38.220766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.469 [2024-06-11 09:43:38.220981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.469 [2024-06-11 09:43:38.221196] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.469 [2024-06-11 09:43:38.221204] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.469 [2024-06-11 09:43:38.221210] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.469 [2024-06-11 09:43:38.224700] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.469 [2024-06-11 09:43:38.233967] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.469 [2024-06-11 09:43:38.234540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.469 [2024-06-11 09:43:38.234556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.469 [2024-06-11 09:43:38.234563] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.469 [2024-06-11 09:43:38.234778] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.469 [2024-06-11 09:43:38.234994] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.469 [2024-06-11 09:43:38.235001] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.469 [2024-06-11 09:43:38.235008] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.469 [2024-06-11 09:43:38.238497] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.469 [2024-06-11 09:43:38.247762] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.469 [2024-06-11 09:43:38.248336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.469 [2024-06-11 09:43:38.248351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.469 [2024-06-11 09:43:38.248358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.469 [2024-06-11 09:43:38.248573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.469 [2024-06-11 09:43:38.248789] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.469 [2024-06-11 09:43:38.248796] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.469 [2024-06-11 09:43:38.248803] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.469 [2024-06-11 09:43:38.252292] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.469 [2024-06-11 09:43:38.261565] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.469 [2024-06-11 09:43:38.262136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.469 [2024-06-11 09:43:38.262150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.469 [2024-06-11 09:43:38.262162] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.469 [2024-06-11 09:43:38.262383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.469 [2024-06-11 09:43:38.262599] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.469 [2024-06-11 09:43:38.262606] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.469 [2024-06-11 09:43:38.262613] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.469 [2024-06-11 09:43:38.266098] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.469 [2024-06-11 09:43:38.275361] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.469 [2024-06-11 09:43:38.275930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.469 [2024-06-11 09:43:38.275945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.469 [2024-06-11 09:43:38.275952] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.470 [2024-06-11 09:43:38.276167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.470 [2024-06-11 09:43:38.276387] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.470 [2024-06-11 09:43:38.276395] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.470 [2024-06-11 09:43:38.276402] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.470 [2024-06-11 09:43:38.279889] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.732 [2024-06-11 09:43:38.289160] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.732 [2024-06-11 09:43:38.289773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.732 [2024-06-11 09:43:38.289788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.732 [2024-06-11 09:43:38.289796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.732 [2024-06-11 09:43:38.290011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.732 [2024-06-11 09:43:38.290226] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.732 [2024-06-11 09:43:38.290235] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.732 [2024-06-11 09:43:38.290242] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.732 [2024-06-11 09:43:38.293734] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.732 [2024-06-11 09:43:38.302996] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.732 [2024-06-11 09:43:38.303645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.732 [2024-06-11 09:43:38.303683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.732 [2024-06-11 09:43:38.303694] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.732 [2024-06-11 09:43:38.303929] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.732 [2024-06-11 09:43:38.304149] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.732 [2024-06-11 09:43:38.304162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.732 [2024-06-11 09:43:38.304169] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.732 [2024-06-11 09:43:38.307671] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.732 [2024-06-11 09:43:38.316732] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.732 [2024-06-11 09:43:38.317326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.732 [2024-06-11 09:43:38.317345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.732 [2024-06-11 09:43:38.317354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.732 [2024-06-11 09:43:38.317570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.732 [2024-06-11 09:43:38.317786] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.732 [2024-06-11 09:43:38.317794] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.732 [2024-06-11 09:43:38.317801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.732 [2024-06-11 09:43:38.321375] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.732 [2024-06-11 09:43:38.330641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.732 [2024-06-11 09:43:38.331220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.732 [2024-06-11 09:43:38.331235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.732 [2024-06-11 09:43:38.331242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.732 [2024-06-11 09:43:38.331463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.732 [2024-06-11 09:43:38.331680] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.732 [2024-06-11 09:43:38.331688] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.732 [2024-06-11 09:43:38.331695] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.732 [2024-06-11 09:43:38.335179] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.732 [2024-06-11 09:43:38.344445] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.732 [2024-06-11 09:43:38.345085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.732 [2024-06-11 09:43:38.345122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.732 [2024-06-11 09:43:38.345133] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.732 [2024-06-11 09:43:38.345376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.732 [2024-06-11 09:43:38.345597] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.732 [2024-06-11 09:43:38.345605] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.732 [2024-06-11 09:43:38.345613] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.732 [2024-06-11 09:43:38.349106] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.732 [2024-06-11 09:43:38.358376] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.732 [2024-06-11 09:43:38.359103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.733 [2024-06-11 09:43:38.359140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.733 [2024-06-11 09:43:38.359151] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.733 [2024-06-11 09:43:38.359393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.733 [2024-06-11 09:43:38.359614] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.733 [2024-06-11 09:43:38.359622] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.733 [2024-06-11 09:43:38.359630] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.733 [2024-06-11 09:43:38.363136] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.733 [2024-06-11 09:43:38.372204] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.733 [2024-06-11 09:43:38.372871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.733 [2024-06-11 09:43:38.372909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.733 [2024-06-11 09:43:38.372920] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.733 [2024-06-11 09:43:38.373155] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.733 [2024-06-11 09:43:38.373383] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.733 [2024-06-11 09:43:38.373392] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.733 [2024-06-11 09:43:38.373399] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.733 [2024-06-11 09:43:38.376897] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.733 [2024-06-11 09:43:38.385960] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.733 [2024-06-11 09:43:38.386574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.733 [2024-06-11 09:43:38.386593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.733 [2024-06-11 09:43:38.386601] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.733 [2024-06-11 09:43:38.386818] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.733 [2024-06-11 09:43:38.387034] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.733 [2024-06-11 09:43:38.387043] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.733 [2024-06-11 09:43:38.387050] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.733 [2024-06-11 09:43:38.390544] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.733 [2024-06-11 09:43:38.399806] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.733 [2024-06-11 09:43:38.400388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.733 [2024-06-11 09:43:38.400426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.733 [2024-06-11 09:43:38.400438] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.733 [2024-06-11 09:43:38.400681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.733 [2024-06-11 09:43:38.400901] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.733 [2024-06-11 09:43:38.400909] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.733 [2024-06-11 09:43:38.400917] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.733 [2024-06-11 09:43:38.404419] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.733 [2024-06-11 09:43:38.413689] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.733 [2024-06-11 09:43:38.414290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.733 [2024-06-11 09:43:38.414334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.733 [2024-06-11 09:43:38.414347] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.733 [2024-06-11 09:43:38.414583] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.733 [2024-06-11 09:43:38.414803] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.733 [2024-06-11 09:43:38.414811] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.733 [2024-06-11 09:43:38.414819] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.733 [2024-06-11 09:43:38.418318] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.733 [2024-06-11 09:43:38.427586] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.733 [2024-06-11 09:43:38.428300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.733 [2024-06-11 09:43:38.428346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.733 [2024-06-11 09:43:38.428358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.733 [2024-06-11 09:43:38.428595] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.733 [2024-06-11 09:43:38.428814] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.733 [2024-06-11 09:43:38.428823] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.733 [2024-06-11 09:43:38.428830] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.733 [2024-06-11 09:43:38.432330] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.733 [2024-06-11 09:43:38.441393] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.733 [2024-06-11 09:43:38.442089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.733 [2024-06-11 09:43:38.442126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.733 [2024-06-11 09:43:38.442136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.733 [2024-06-11 09:43:38.442379] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.733 [2024-06-11 09:43:38.442599] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.733 [2024-06-11 09:43:38.442608] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.733 [2024-06-11 09:43:38.442620] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.733 [2024-06-11 09:43:38.446114] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.733 [2024-06-11 09:43:38.455185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.733 [2024-06-11 09:43:38.455781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.733 [2024-06-11 09:43:38.455800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.733 [2024-06-11 09:43:38.455808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.733 [2024-06-11 09:43:38.456024] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.733 [2024-06-11 09:43:38.456240] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.733 [2024-06-11 09:43:38.456247] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.733 [2024-06-11 09:43:38.456254] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.733 [2024-06-11 09:43:38.459746] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.733 [2024-06-11 09:43:38.469018] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.733 [2024-06-11 09:43:38.469594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.733 [2024-06-11 09:43:38.469610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.733 [2024-06-11 09:43:38.469617] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.733 [2024-06-11 09:43:38.469833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.733 [2024-06-11 09:43:38.470048] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.733 [2024-06-11 09:43:38.470057] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.733 [2024-06-11 09:43:38.470064] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.733 [2024-06-11 09:43:38.473558] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.733 [2024-06-11 09:43:38.482816] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.733 [2024-06-11 09:43:38.483529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.733 [2024-06-11 09:43:38.483566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.733 [2024-06-11 09:43:38.483577] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.733 [2024-06-11 09:43:38.483812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.733 [2024-06-11 09:43:38.484031] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.733 [2024-06-11 09:43:38.484040] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.733 [2024-06-11 09:43:38.484047] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.733 [2024-06-11 09:43:38.487556] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.733 [2024-06-11 09:43:38.496622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.734 [2024-06-11 09:43:38.497335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.734 [2024-06-11 09:43:38.497377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.734 [2024-06-11 09:43:38.497388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.734 [2024-06-11 09:43:38.497623] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.734 [2024-06-11 09:43:38.497843] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.734 [2024-06-11 09:43:38.497851] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.734 [2024-06-11 09:43:38.497859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.734 [2024-06-11 09:43:38.501363] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.734 [2024-06-11 09:43:38.510433] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.734 [2024-06-11 09:43:38.511069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.734 [2024-06-11 09:43:38.511106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.734 [2024-06-11 09:43:38.511117] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.734 [2024-06-11 09:43:38.511359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.734 [2024-06-11 09:43:38.511580] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.734 [2024-06-11 09:43:38.511588] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.734 [2024-06-11 09:43:38.511596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.734 [2024-06-11 09:43:38.515090] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.734 [2024-06-11 09:43:38.524361] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.734 [2024-06-11 09:43:38.525015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.734 [2024-06-11 09:43:38.525052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.734 [2024-06-11 09:43:38.525064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.734 [2024-06-11 09:43:38.525301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.734 [2024-06-11 09:43:38.525530] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.734 [2024-06-11 09:43:38.525539] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.734 [2024-06-11 09:43:38.525546] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.734 [2024-06-11 09:43:38.529041] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.734 [2024-06-11 09:43:38.538105] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.734 [2024-06-11 09:43:38.538651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.734 [2024-06-11 09:43:38.538669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.734 [2024-06-11 09:43:38.538677] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.734 [2024-06-11 09:43:38.538893] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.734 [2024-06-11 09:43:38.539114] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.734 [2024-06-11 09:43:38.539124] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.734 [2024-06-11 09:43:38.539130] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.734 [2024-06-11 09:43:38.542626] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.996 [2024-06-11 09:43:38.551898] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.996 [2024-06-11 09:43:38.552564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.996 [2024-06-11 09:43:38.552601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.996 [2024-06-11 09:43:38.552612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.996 [2024-06-11 09:43:38.552848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.996 [2024-06-11 09:43:38.553068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.996 [2024-06-11 09:43:38.553076] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.996 [2024-06-11 09:43:38.553084] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.996 [2024-06-11 09:43:38.556586] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.997 [2024-06-11 09:43:38.565666] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.997 [2024-06-11 09:43:38.566372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-06-11 09:43:38.566409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.997 [2024-06-11 09:43:38.566421] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.997 [2024-06-11 09:43:38.566660] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.997 [2024-06-11 09:43:38.566880] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.997 [2024-06-11 09:43:38.566889] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.997 [2024-06-11 09:43:38.566896] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.997 [2024-06-11 09:43:38.570400] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.997 [2024-06-11 09:43:38.579463] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.997 [2024-06-11 09:43:38.580138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-06-11 09:43:38.580176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.997 [2024-06-11 09:43:38.580186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.997 [2024-06-11 09:43:38.580429] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.997 [2024-06-11 09:43:38.580650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.997 [2024-06-11 09:43:38.580658] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.997 [2024-06-11 09:43:38.580665] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.997 [2024-06-11 09:43:38.584166] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.997 [2024-06-11 09:43:38.593231] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.997 [2024-06-11 09:43:38.593896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-06-11 09:43:38.593934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.997 [2024-06-11 09:43:38.593944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.997 [2024-06-11 09:43:38.594180] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.997 [2024-06-11 09:43:38.594408] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.997 [2024-06-11 09:43:38.594417] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.997 [2024-06-11 09:43:38.594425] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.997 [2024-06-11 09:43:38.597919] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.997 [2024-06-11 09:43:38.606978] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.997 [2024-06-11 09:43:38.607615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-06-11 09:43:38.607634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.997 [2024-06-11 09:43:38.607642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.997 [2024-06-11 09:43:38.607858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.997 [2024-06-11 09:43:38.608074] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.997 [2024-06-11 09:43:38.608081] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.997 [2024-06-11 09:43:38.608088] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.997 [2024-06-11 09:43:38.611581] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.997 [2024-06-11 09:43:38.620843] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.997 [2024-06-11 09:43:38.621615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-06-11 09:43:38.621653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.997 [2024-06-11 09:43:38.621664] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.997 [2024-06-11 09:43:38.621899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.997 [2024-06-11 09:43:38.622119] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.997 [2024-06-11 09:43:38.622127] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.997 [2024-06-11 09:43:38.622134] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.997 [2024-06-11 09:43:38.625637] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.997 [2024-06-11 09:43:38.634700] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.997 [2024-06-11 09:43:38.635244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-06-11 09:43:38.635263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.997 [2024-06-11 09:43:38.635275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.997 [2024-06-11 09:43:38.635497] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.997 [2024-06-11 09:43:38.635713] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.997 [2024-06-11 09:43:38.635721] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.997 [2024-06-11 09:43:38.635727] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.997 [2024-06-11 09:43:38.639219] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.997 [2024-06-11 09:43:38.648487] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.997 [2024-06-11 09:43:38.649012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-06-11 09:43:38.649027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.997 [2024-06-11 09:43:38.649035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.997 [2024-06-11 09:43:38.649250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.997 [2024-06-11 09:43:38.649470] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.997 [2024-06-11 09:43:38.649479] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.997 [2024-06-11 09:43:38.649486] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.997 [2024-06-11 09:43:38.652975] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.997 [2024-06-11 09:43:38.662255] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.997 [2024-06-11 09:43:38.662836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-06-11 09:43:38.662851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.997 [2024-06-11 09:43:38.662858] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.997 [2024-06-11 09:43:38.663073] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.997 [2024-06-11 09:43:38.663289] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.997 [2024-06-11 09:43:38.663297] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.997 [2024-06-11 09:43:38.663304] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.997 [2024-06-11 09:43:38.666797] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.997 [2024-06-11 09:43:38.676062] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.997 [2024-06-11 09:43:38.676723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-06-11 09:43:38.676760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.997 [2024-06-11 09:43:38.676771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.997 [2024-06-11 09:43:38.677005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.997 [2024-06-11 09:43:38.677225] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.997 [2024-06-11 09:43:38.677237] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.997 [2024-06-11 09:43:38.677245] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.997 [2024-06-11 09:43:38.680746] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.997 [2024-06-11 09:43:38.689806] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.997 [2024-06-11 09:43:38.690553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.997 [2024-06-11 09:43:38.690590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.997 [2024-06-11 09:43:38.690601] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.997 [2024-06-11 09:43:38.690837] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.997 [2024-06-11 09:43:38.691057] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.997 [2024-06-11 09:43:38.691065] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.998 [2024-06-11 09:43:38.691072] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.998 [2024-06-11 09:43:38.694577] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.998 [2024-06-11 09:43:38.703637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.998 [2024-06-11 09:43:38.704312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-06-11 09:43:38.704357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.998 [2024-06-11 09:43:38.704368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.998 [2024-06-11 09:43:38.704603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.998 [2024-06-11 09:43:38.704822] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.998 [2024-06-11 09:43:38.704830] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.998 [2024-06-11 09:43:38.704837] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.998 [2024-06-11 09:43:38.708334] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.998 [2024-06-11 09:43:38.717398] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.998 [2024-06-11 09:43:38.717970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-06-11 09:43:38.718008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.998 [2024-06-11 09:43:38.718018] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.998 [2024-06-11 09:43:38.718254] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.998 [2024-06-11 09:43:38.718481] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.998 [2024-06-11 09:43:38.718490] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.998 [2024-06-11 09:43:38.718498] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.998 [2024-06-11 09:43:38.721992] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.998 [2024-06-11 09:43:38.731265] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.998 [2024-06-11 09:43:38.731883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-06-11 09:43:38.731921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.998 [2024-06-11 09:43:38.731931] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.998 [2024-06-11 09:43:38.732167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.998 [2024-06-11 09:43:38.732395] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.998 [2024-06-11 09:43:38.732404] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.998 [2024-06-11 09:43:38.732411] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.998 [2024-06-11 09:43:38.735908] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.998 [2024-06-11 09:43:38.745179] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.998 [2024-06-11 09:43:38.745941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.998 [2024-06-11 09:43:38.745978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:06.998 [2024-06-11 09:43:38.745989] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:06.998 [2024-06-11 09:43:38.746225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:06.998 [2024-06-11 09:43:38.746454] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.998 [2024-06-11 09:43:38.746463] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.998 [2024-06-11 09:43:38.746470] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.998 [2024-06-11 09:43:38.749964] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.998 [2024-06-11 09:43:38.759022] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.998 [2024-06-11 09:43:38.759716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.998 [2024-06-11 09:43:38.759754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.998 [2024-06-11 09:43:38.759766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.998 [2024-06-11 09:43:38.760002] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.998 [2024-06-11 09:43:38.760222] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.998 [2024-06-11 09:43:38.760231] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.998 [2024-06-11 09:43:38.760238] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.998 [2024-06-11 09:43:38.763755] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.998 [2024-06-11 09:43:38.772816] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.998 [2024-06-11 09:43:38.773426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.998 [2024-06-11 09:43:38.773463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.998 [2024-06-11 09:43:38.773475] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.998 [2024-06-11 09:43:38.773717] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.998 [2024-06-11 09:43:38.773938] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.998 [2024-06-11 09:43:38.773947] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.998 [2024-06-11 09:43:38.773955] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.998 [2024-06-11 09:43:38.777459] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.998 [2024-06-11 09:43:38.786720] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.998 [2024-06-11 09:43:38.787397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.998 [2024-06-11 09:43:38.787434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.998 [2024-06-11 09:43:38.787446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.998 [2024-06-11 09:43:38.787685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.998 [2024-06-11 09:43:38.787904] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.998 [2024-06-11 09:43:38.787913] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.998 [2024-06-11 09:43:38.787921] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.998 [2024-06-11 09:43:38.791426] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.998 [2024-06-11 09:43:38.800489] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.998 [2024-06-11 09:43:38.801153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.998 [2024-06-11 09:43:38.801191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:06.998 [2024-06-11 09:43:38.801201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:06.998 [2024-06-11 09:43:38.801445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:06.998 [2024-06-11 09:43:38.801666] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.998 [2024-06-11 09:43:38.801675] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.998 [2024-06-11 09:43:38.801683] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.998 [2024-06-11 09:43:38.805179] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.261 [2024-06-11 09:43:38.814242] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.261 [2024-06-11 09:43:38.815025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.261 [2024-06-11 09:43:38.815063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.261 [2024-06-11 09:43:38.815074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.261 [2024-06-11 09:43:38.815309] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.261 [2024-06-11 09:43:38.815537] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.261 [2024-06-11 09:43:38.815546] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.261 [2024-06-11 09:43:38.815558] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.261 [2024-06-11 09:43:38.819055] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.261 [2024-06-11 09:43:38.828115] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.261 [2024-06-11 09:43:38.828808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.261 [2024-06-11 09:43:38.828845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.261 [2024-06-11 09:43:38.828856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.261 [2024-06-11 09:43:38.829091] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.261 [2024-06-11 09:43:38.829311] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.261 [2024-06-11 09:43:38.829328] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.261 [2024-06-11 09:43:38.829336] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.261 [2024-06-11 09:43:38.832831] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.261 [2024-06-11 09:43:38.841893] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.261 [2024-06-11 09:43:38.842567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.261 [2024-06-11 09:43:38.842604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.261 [2024-06-11 09:43:38.842615] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.261 [2024-06-11 09:43:38.842850] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.261 [2024-06-11 09:43:38.843070] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.261 [2024-06-11 09:43:38.843078] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.261 [2024-06-11 09:43:38.843086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.261 [2024-06-11 09:43:38.846588] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.261 [2024-06-11 09:43:38.855653] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.261 [2024-06-11 09:43:38.856348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.261 [2024-06-11 09:43:38.856385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.261 [2024-06-11 09:43:38.856398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.261 [2024-06-11 09:43:38.856635] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.261 [2024-06-11 09:43:38.856854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.261 [2024-06-11 09:43:38.856862] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.261 [2024-06-11 09:43:38.856870] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.261 [2024-06-11 09:43:38.860374] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.261 [2024-06-11 09:43:38.869446] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.261 [2024-06-11 09:43:38.870047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.261 [2024-06-11 09:43:38.870085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.261 [2024-06-11 09:43:38.870095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.261 [2024-06-11 09:43:38.870339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.261 [2024-06-11 09:43:38.870560] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.261 [2024-06-11 09:43:38.870569] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.261 [2024-06-11 09:43:38.870576] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.261 [2024-06-11 09:43:38.874069] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.261 [2024-06-11 09:43:38.883334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.261 [2024-06-11 09:43:38.884054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.261 [2024-06-11 09:43:38.884092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.261 [2024-06-11 09:43:38.884103] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.261 [2024-06-11 09:43:38.884346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.261 [2024-06-11 09:43:38.884567] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.261 [2024-06-11 09:43:38.884576] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.261 [2024-06-11 09:43:38.884583] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.261 [2024-06-11 09:43:38.888077] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.261 [2024-06-11 09:43:38.897134] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.261 [2024-06-11 09:43:38.897797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.261 [2024-06-11 09:43:38.897834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.261 [2024-06-11 09:43:38.897845] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.262 [2024-06-11 09:43:38.898080] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.262 [2024-06-11 09:43:38.898300] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.262 [2024-06-11 09:43:38.898308] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.262 [2024-06-11 09:43:38.898324] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.262 [2024-06-11 09:43:38.901819] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.262 [2024-06-11 09:43:38.910879] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.262 [2024-06-11 09:43:38.911417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.262 [2024-06-11 09:43:38.911454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.262 [2024-06-11 09:43:38.911466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.262 [2024-06-11 09:43:38.911709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.262 [2024-06-11 09:43:38.911929] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.262 [2024-06-11 09:43:38.911938] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.262 [2024-06-11 09:43:38.911945] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.262 [2024-06-11 09:43:38.915450] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.262 [2024-06-11 09:43:38.924713] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.262 [2024-06-11 09:43:38.925225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.262 [2024-06-11 09:43:38.925262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.262 [2024-06-11 09:43:38.925273] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.262 [2024-06-11 09:43:38.925516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.262 [2024-06-11 09:43:38.925737] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.262 [2024-06-11 09:43:38.925745] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.262 [2024-06-11 09:43:38.925753] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.262 [2024-06-11 09:43:38.929245] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.262 [2024-06-11 09:43:38.938508] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.262 [2024-06-11 09:43:38.939208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.262 [2024-06-11 09:43:38.939246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.262 [2024-06-11 09:43:38.939256] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.262 [2024-06-11 09:43:38.939501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.262 [2024-06-11 09:43:38.939723] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.262 [2024-06-11 09:43:38.939732] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.262 [2024-06-11 09:43:38.939739] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.262 [2024-06-11 09:43:38.943233] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.262 [2024-06-11 09:43:38.952292] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.262 [2024-06-11 09:43:38.953023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.262 [2024-06-11 09:43:38.953060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.262 [2024-06-11 09:43:38.953071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.262 [2024-06-11 09:43:38.953306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.262 [2024-06-11 09:43:38.953535] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.262 [2024-06-11 09:43:38.953544] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.262 [2024-06-11 09:43:38.953556] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.262 [2024-06-11 09:43:38.957052] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.262 [2024-06-11 09:43:38.966125] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.262 [2024-06-11 09:43:38.966766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.262 [2024-06-11 09:43:38.966803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.262 [2024-06-11 09:43:38.966813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.262 [2024-06-11 09:43:38.967049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.262 [2024-06-11 09:43:38.967269] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.262 [2024-06-11 09:43:38.967277] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.262 [2024-06-11 09:43:38.967285] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.262 [2024-06-11 09:43:38.970790] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.262 [2024-06-11 09:43:38.980052] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.262 [2024-06-11 09:43:38.980728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.262 [2024-06-11 09:43:38.980766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.262 [2024-06-11 09:43:38.980777] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.262 [2024-06-11 09:43:38.981012] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.262 [2024-06-11 09:43:38.981231] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.262 [2024-06-11 09:43:38.981240] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.262 [2024-06-11 09:43:38.981247] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.262 [2024-06-11 09:43:38.984751] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.262 [2024-06-11 09:43:38.993808] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.262 [2024-06-11 09:43:38.994543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.262 [2024-06-11 09:43:38.994580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.262 [2024-06-11 09:43:38.994591] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.262 [2024-06-11 09:43:38.994827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.262 [2024-06-11 09:43:38.995047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.262 [2024-06-11 09:43:38.995056] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.262 [2024-06-11 09:43:38.995063] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.262 [2024-06-11 09:43:38.998646] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.262 [2024-06-11 09:43:39.007709] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.262 [2024-06-11 09:43:39.008457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.262 [2024-06-11 09:43:39.008499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.262 [2024-06-11 09:43:39.008511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.262 [2024-06-11 09:43:39.008747] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.262 [2024-06-11 09:43:39.008967] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.262 [2024-06-11 09:43:39.008976] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.262 [2024-06-11 09:43:39.008984] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.262 [2024-06-11 09:43:39.012487] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.262 [2024-06-11 09:43:39.021547] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.262 [2024-06-11 09:43:39.022175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.262 [2024-06-11 09:43:39.022193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.262 [2024-06-11 09:43:39.022201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.262 [2024-06-11 09:43:39.022423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.262 [2024-06-11 09:43:39.022639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.262 [2024-06-11 09:43:39.022647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.262 [2024-06-11 09:43:39.022654] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.262 [2024-06-11 09:43:39.026162] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.262 [2024-06-11 09:43:39.035422] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.262 [2024-06-11 09:43:39.036125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.262 [2024-06-11 09:43:39.036162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.262 [2024-06-11 09:43:39.036173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.262 [2024-06-11 09:43:39.036416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.262 [2024-06-11 09:43:39.036636] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.263 [2024-06-11 09:43:39.036645] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.263 [2024-06-11 09:43:39.036652] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.263 [2024-06-11 09:43:39.040145] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.263 [2024-06-11 09:43:39.049205] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.263 [2024-06-11 09:43:39.049926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.263 [2024-06-11 09:43:39.049964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.263 [2024-06-11 09:43:39.049974] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.263 [2024-06-11 09:43:39.050210] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.263 [2024-06-11 09:43:39.050444] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.263 [2024-06-11 09:43:39.050454] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.263 [2024-06-11 09:43:39.050461] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.263 [2024-06-11 09:43:39.053955] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.263 [2024-06-11 09:43:39.063023] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.263 [2024-06-11 09:43:39.063721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.263 [2024-06-11 09:43:39.063758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.263 [2024-06-11 09:43:39.063769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.263 [2024-06-11 09:43:39.064004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.263 [2024-06-11 09:43:39.064224] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.263 [2024-06-11 09:43:39.064233] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.263 [2024-06-11 09:43:39.064240] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.263 [2024-06-11 09:43:39.067743] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.525 [2024-06-11 09:43:39.076808] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.525 [2024-06-11 09:43:39.077574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.525 [2024-06-11 09:43:39.077612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.525 [2024-06-11 09:43:39.077623] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.525 [2024-06-11 09:43:39.077858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.525 [2024-06-11 09:43:39.078078] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.525 [2024-06-11 09:43:39.078086] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.525 [2024-06-11 09:43:39.078093] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.525 [2024-06-11 09:43:39.081597] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.525 [2024-06-11 09:43:39.090684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.525 [2024-06-11 09:43:39.091371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.525 [2024-06-11 09:43:39.091408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.525 [2024-06-11 09:43:39.091420] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.525 [2024-06-11 09:43:39.091657] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.525 [2024-06-11 09:43:39.091877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.525 [2024-06-11 09:43:39.091885] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.525 [2024-06-11 09:43:39.091893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.525 [2024-06-11 09:43:39.095403] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.525 [2024-06-11 09:43:39.104463] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.525 [2024-06-11 09:43:39.105178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.525 [2024-06-11 09:43:39.105215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.525 [2024-06-11 09:43:39.105226] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.525 [2024-06-11 09:43:39.105470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.525 [2024-06-11 09:43:39.105691] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.525 [2024-06-11 09:43:39.105699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.525 [2024-06-11 09:43:39.105706] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.525 [2024-06-11 09:43:39.109201] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.525 [2024-06-11 09:43:39.118259] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.525 [2024-06-11 09:43:39.118954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.525 [2024-06-11 09:43:39.118991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.525 [2024-06-11 09:43:39.119002] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.525 [2024-06-11 09:43:39.119237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.525 [2024-06-11 09:43:39.119466] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.525 [2024-06-11 09:43:39.119476] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.525 [2024-06-11 09:43:39.119483] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.525 [2024-06-11 09:43:39.122981] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.525 [2024-06-11 09:43:39.132041] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.525 [2024-06-11 09:43:39.132721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.525 [2024-06-11 09:43:39.132759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.525 [2024-06-11 09:43:39.132769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.525 [2024-06-11 09:43:39.133004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.525 [2024-06-11 09:43:39.133224] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.525 [2024-06-11 09:43:39.133233] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.525 [2024-06-11 09:43:39.133240] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.526 [2024-06-11 09:43:39.136743] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.526 [2024-06-11 09:43:39.145803] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.526 [2024-06-11 09:43:39.146322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.526 [2024-06-11 09:43:39.146341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.526 [2024-06-11 09:43:39.146354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.526 [2024-06-11 09:43:39.146571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.526 [2024-06-11 09:43:39.146787] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.526 [2024-06-11 09:43:39.146795] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.526 [2024-06-11 09:43:39.146802] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.526 [2024-06-11 09:43:39.150290] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.526 [2024-06-11 09:43:39.159586] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.526 [2024-06-11 09:43:39.160214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.526 [2024-06-11 09:43:39.160230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.526 [2024-06-11 09:43:39.160237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.526 [2024-06-11 09:43:39.160457] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.526 [2024-06-11 09:43:39.160673] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.526 [2024-06-11 09:43:39.160682] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.526 [2024-06-11 09:43:39.160689] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.526 [2024-06-11 09:43:39.164186] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.526 [2024-06-11 09:43:39.173452] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.526 [2024-06-11 09:43:39.174126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.526 [2024-06-11 09:43:39.174164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.526 [2024-06-11 09:43:39.174175] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.526 [2024-06-11 09:43:39.174417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.526 [2024-06-11 09:43:39.174638] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.526 [2024-06-11 09:43:39.174647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.526 [2024-06-11 09:43:39.174654] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.526 [2024-06-11 09:43:39.178155] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.526 [2024-06-11 09:43:39.187217] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.526 [2024-06-11 09:43:39.187811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.526 [2024-06-11 09:43:39.187831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.526 [2024-06-11 09:43:39.187839] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.526 [2024-06-11 09:43:39.188055] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.526 [2024-06-11 09:43:39.188272] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.526 [2024-06-11 09:43:39.188284] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.526 [2024-06-11 09:43:39.188291] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.526 [2024-06-11 09:43:39.191787] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.526 [2024-06-11 09:43:39.201053] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.526 [2024-06-11 09:43:39.201732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.526 [2024-06-11 09:43:39.201770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.526 [2024-06-11 09:43:39.201780] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.526 [2024-06-11 09:43:39.202016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.526 [2024-06-11 09:43:39.202235] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.526 [2024-06-11 09:43:39.202244] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.526 [2024-06-11 09:43:39.202251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.526 [2024-06-11 09:43:39.205755] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.526 [2024-06-11 09:43:39.214820] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.526 [2024-06-11 09:43:39.215602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.526 [2024-06-11 09:43:39.215639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.526 [2024-06-11 09:43:39.215650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.526 [2024-06-11 09:43:39.215885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.526 [2024-06-11 09:43:39.216105] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.526 [2024-06-11 09:43:39.216114] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.526 [2024-06-11 09:43:39.216121] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.526 [2024-06-11 09:43:39.219623] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.526 [2024-06-11 09:43:39.228684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.526 [2024-06-11 09:43:39.229389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.526 [2024-06-11 09:43:39.229427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.526 [2024-06-11 09:43:39.229440] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.526 [2024-06-11 09:43:39.229678] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.526 [2024-06-11 09:43:39.229898] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.526 [2024-06-11 09:43:39.229906] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.526 [2024-06-11 09:43:39.229914] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.527 [2024-06-11 09:43:39.233424] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.527 [2024-06-11 09:43:39.242500] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.527 [2024-06-11 09:43:39.243056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.527 [2024-06-11 09:43:39.243093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.527 [2024-06-11 09:43:39.243103] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.527 [2024-06-11 09:43:39.243348] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.527 [2024-06-11 09:43:39.243569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.527 [2024-06-11 09:43:39.243577] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.527 [2024-06-11 09:43:39.243584] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.527 [2024-06-11 09:43:39.247083] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.527 [2024-06-11 09:43:39.256359] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.527 [2024-06-11 09:43:39.256978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.527 [2024-06-11 09:43:39.256996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.527 [2024-06-11 09:43:39.257004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.527 [2024-06-11 09:43:39.257220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.527 [2024-06-11 09:43:39.257442] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.527 [2024-06-11 09:43:39.257450] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.527 [2024-06-11 09:43:39.257457] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.527 [2024-06-11 09:43:39.260952] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.527 [2024-06-11 09:43:39.270239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.527 [2024-06-11 09:43:39.270892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.527 [2024-06-11 09:43:39.270929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.527 [2024-06-11 09:43:39.270940] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.527 [2024-06-11 09:43:39.271175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.527 [2024-06-11 09:43:39.271402] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.527 [2024-06-11 09:43:39.271411] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.527 [2024-06-11 09:43:39.271419] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.527 [2024-06-11 09:43:39.274918] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.527 [2024-06-11 09:43:39.284005] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.527 [2024-06-11 09:43:39.284726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.527 [2024-06-11 09:43:39.284763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.527 [2024-06-11 09:43:39.284774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.527 [2024-06-11 09:43:39.285013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.527 [2024-06-11 09:43:39.285233] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.527 [2024-06-11 09:43:39.285242] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.527 [2024-06-11 09:43:39.285250] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.527 [2024-06-11 09:43:39.288752] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.527 [2024-06-11 09:43:39.297808] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.527 [2024-06-11 09:43:39.298525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.527 [2024-06-11 09:43:39.298562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.527 [2024-06-11 09:43:39.298573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.527 [2024-06-11 09:43:39.298809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.527 [2024-06-11 09:43:39.299028] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.527 [2024-06-11 09:43:39.299037] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.527 [2024-06-11 09:43:39.299044] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.527 [2024-06-11 09:43:39.302547] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.527 [2024-06-11 09:43:39.311602] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.527 [2024-06-11 09:43:39.312300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.527 [2024-06-11 09:43:39.312345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.527 [2024-06-11 09:43:39.312355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.527 [2024-06-11 09:43:39.312591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.527 [2024-06-11 09:43:39.312811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.527 [2024-06-11 09:43:39.312819] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.527 [2024-06-11 09:43:39.312826] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.527 [2024-06-11 09:43:39.316322] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.527 [2024-06-11 09:43:39.325377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.527 [2024-06-11 09:43:39.326097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.527 [2024-06-11 09:43:39.326135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.527 [2024-06-11 09:43:39.326145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.527 [2024-06-11 09:43:39.326389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.527 [2024-06-11 09:43:39.326610] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.527 [2024-06-11 09:43:39.326619] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.527 [2024-06-11 09:43:39.326630] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.527 [2024-06-11 09:43:39.330128] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.789 [2024-06-11 09:43:39.339194] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.789 [2024-06-11 09:43:39.339794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.789 [2024-06-11 09:43:39.339831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.789 [2024-06-11 09:43:39.339841] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.789 [2024-06-11 09:43:39.340077] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.789 [2024-06-11 09:43:39.340297] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.789 [2024-06-11 09:43:39.340306] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.789 [2024-06-11 09:43:39.340313] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.789 [2024-06-11 09:43:39.343817] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.789 [2024-06-11 09:43:39.353082] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.789 [2024-06-11 09:43:39.353676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.789 [2024-06-11 09:43:39.353713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:07.789 [2024-06-11 09:43:39.353724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:07.789 [2024-06-11 09:43:39.353960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:07.789 [2024-06-11 09:43:39.354179] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.790 [2024-06-11 09:43:39.354187] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.790 [2024-06-11 09:43:39.354195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.790 [2024-06-11 09:43:39.357698] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.790 [2024-06-11 09:43:39.366971] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.790 [2024-06-11 09:43:39.367652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.790 [2024-06-11 09:43:39.367690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:07.790 [2024-06-11 09:43:39.367700] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:07.790 [2024-06-11 09:43:39.367936] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:07.790 [2024-06-11 09:43:39.368155] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.790 [2024-06-11 09:43:39.368164] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.790 [2024-06-11 09:43:39.368171] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.790 [2024-06-11 09:43:39.371673] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.790 [2024-06-11 09:43:39.380728] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.790 [2024-06-11 09:43:39.381408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.790 [2024-06-11 09:43:39.381446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:07.790 [2024-06-11 09:43:39.381457] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:07.790 [2024-06-11 09:43:39.381692] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:07.790 [2024-06-11 09:43:39.381911] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.790 [2024-06-11 09:43:39.381920] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.790 [2024-06-11 09:43:39.381927] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.790 [2024-06-11 09:43:39.385432] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.790 [2024-06-11 09:43:39.394483] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.790 [2024-06-11 09:43:39.395109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.790 [2024-06-11 09:43:39.395127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:07.790 [2024-06-11 09:43:39.395135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:07.790 [2024-06-11 09:43:39.395357] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:07.790 [2024-06-11 09:43:39.395573] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.790 [2024-06-11 09:43:39.395582] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.790 [2024-06-11 09:43:39.395589] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.790 [2024-06-11 09:43:39.399075] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.790 [2024-06-11 09:43:39.408367] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.790 [2024-06-11 09:43:39.409089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.790 [2024-06-11 09:43:39.409125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:07.790 [2024-06-11 09:43:39.409136] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:07.790 [2024-06-11 09:43:39.409380] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:07.790 [2024-06-11 09:43:39.409601] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.790 [2024-06-11 09:43:39.409609] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.790 [2024-06-11 09:43:39.409617] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.790 [2024-06-11 09:43:39.413109] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.790 [2024-06-11 09:43:39.422160] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.790 [2024-06-11 09:43:39.422853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.790 [2024-06-11 09:43:39.422890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:07.790 [2024-06-11 09:43:39.422901] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:07.790 [2024-06-11 09:43:39.423145] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:07.790 [2024-06-11 09:43:39.423375] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.790 [2024-06-11 09:43:39.423384] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.790 [2024-06-11 09:43:39.423392] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.790 [2024-06-11 09:43:39.426885] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.790 [2024-06-11 09:43:39.435938] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.790 [2024-06-11 09:43:39.436655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.790 [2024-06-11 09:43:39.436693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:07.790 [2024-06-11 09:43:39.436703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:07.790 [2024-06-11 09:43:39.436939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:07.790 [2024-06-11 09:43:39.437159] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.790 [2024-06-11 09:43:39.437167] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.790 [2024-06-11 09:43:39.437175] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.790 [2024-06-11 09:43:39.440678] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.790 [2024-06-11 09:43:39.449733] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.790 [2024-06-11 09:43:39.450542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.790 [2024-06-11 09:43:39.450579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:07.790 [2024-06-11 09:43:39.450590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:07.790 [2024-06-11 09:43:39.450825] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:07.790 [2024-06-11 09:43:39.451044] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.790 [2024-06-11 09:43:39.451053] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.790 [2024-06-11 09:43:39.451060] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.790 [2024-06-11 09:43:39.454564] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.790 [2024-06-11 09:43:39.463626] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.790 [2024-06-11 09:43:39.464207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.790 [2024-06-11 09:43:39.464244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:07.790 [2024-06-11 09:43:39.464256] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:07.790 [2024-06-11 09:43:39.464502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:07.790 [2024-06-11 09:43:39.464723] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.790 [2024-06-11 09:43:39.464731] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.790 [2024-06-11 09:43:39.464743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.790 [2024-06-11 09:43:39.468242] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.790 [2024-06-11 09:43:39.477548] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.790 [2024-06-11 09:43:39.478208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.790 [2024-06-11 09:43:39.478245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:07.790 [2024-06-11 09:43:39.478256] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:07.790 [2024-06-11 09:43:39.478499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:07.790 [2024-06-11 09:43:39.478719] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.790 [2024-06-11 09:43:39.478728] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.790 [2024-06-11 09:43:39.478735] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.790 [2024-06-11 09:43:39.482231] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.790 [2024-06-11 09:43:39.491292] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.790 [2024-06-11 09:43:39.492025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.790 [2024-06-11 09:43:39.492062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:07.790 [2024-06-11 09:43:39.492072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:07.790 [2024-06-11 09:43:39.492308] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:07.790 [2024-06-11 09:43:39.492537] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.790 [2024-06-11 09:43:39.492546] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.791 [2024-06-11 09:43:39.492553] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.791 [2024-06-11 09:43:39.496047] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.791 [2024-06-11 09:43:39.505102] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.791 [2024-06-11 09:43:39.505737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.791 [2024-06-11 09:43:39.505774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:07.791 [2024-06-11 09:43:39.505785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:07.791 [2024-06-11 09:43:39.506020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:07.791 [2024-06-11 09:43:39.506240] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.791 [2024-06-11 09:43:39.506248] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.791 [2024-06-11 09:43:39.506255] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.791 [2024-06-11 09:43:39.509759] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.791 [2024-06-11 09:43:39.519027] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.791 [2024-06-11 09:43:39.519705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.791 [2024-06-11 09:43:39.519746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:07.791 [2024-06-11 09:43:39.519757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:07.791 [2024-06-11 09:43:39.519992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:07.791 [2024-06-11 09:43:39.520212] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.791 [2024-06-11 09:43:39.520220] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.791 [2024-06-11 09:43:39.520228] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.791 [2024-06-11 09:43:39.523730] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.791 [2024-06-11 09:43:39.532784] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.791 [2024-06-11 09:43:39.533496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.791 [2024-06-11 09:43:39.533534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:07.791 [2024-06-11 09:43:39.533544] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:07.791 [2024-06-11 09:43:39.533780] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:07.791 [2024-06-11 09:43:39.533999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.791 [2024-06-11 09:43:39.534007] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.791 [2024-06-11 09:43:39.534015] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.791 [2024-06-11 09:43:39.537532] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.791 [2024-06-11 09:43:39.546592] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.791 [2024-06-11 09:43:39.547221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.791 [2024-06-11 09:43:39.547239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:07.791 [2024-06-11 09:43:39.547247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:07.791 [2024-06-11 09:43:39.547469] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:07.791 [2024-06-11 09:43:39.547686] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.791 [2024-06-11 09:43:39.547694] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.791 [2024-06-11 09:43:39.547701] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.791 [2024-06-11 09:43:39.551187] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.791 [2024-06-11 09:43:39.560447] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.791 [2024-06-11 09:43:39.560923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.791 [2024-06-11 09:43:39.560960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:07.791 [2024-06-11 09:43:39.560972] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:07.791 [2024-06-11 09:43:39.561208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:07.791 [2024-06-11 09:43:39.561451] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.791 [2024-06-11 09:43:39.561461] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.791 [2024-06-11 09:43:39.561468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.791 [2024-06-11 09:43:39.564961] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.791 [2024-06-11 09:43:39.574220] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.791 [2024-06-11 09:43:39.574807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.791 [2024-06-11 09:43:39.574825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:07.791 [2024-06-11 09:43:39.574832] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:07.791 [2024-06-11 09:43:39.575049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:07.791 [2024-06-11 09:43:39.575264] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.791 [2024-06-11 09:43:39.575272] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.791 [2024-06-11 09:43:39.575279] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.791 [2024-06-11 09:43:39.578774] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.791 [2024-06-11 09:43:39.588031] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.791 [2024-06-11 09:43:39.588696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.791 [2024-06-11 09:43:39.588733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:07.791 [2024-06-11 09:43:39.588744] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:07.791 [2024-06-11 09:43:39.588979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:07.791 [2024-06-11 09:43:39.589199] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.791 [2024-06-11 09:43:39.589207] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.791 [2024-06-11 09:43:39.589215] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.791 [2024-06-11 09:43:39.592725] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.791 [2024-06-11 09:43:39.601786] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.791 [2024-06-11 09:43:39.602503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.791 [2024-06-11 09:43:39.602540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:07.791 [2024-06-11 09:43:39.602551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:07.791 [2024-06-11 09:43:39.602787] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:07.791 [2024-06-11 09:43:39.603007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.791 [2024-06-11 09:43:39.603015] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.791 [2024-06-11 09:43:39.603022] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.053 [2024-06-11 09:43:39.606535] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.053 [2024-06-11 09:43:39.615600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.053 [2024-06-11 09:43:39.616296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.053 [2024-06-11 09:43:39.616340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.053 [2024-06-11 09:43:39.616351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.053 [2024-06-11 09:43:39.616587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.053 [2024-06-11 09:43:39.616807] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.053 [2024-06-11 09:43:39.616815] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.053 [2024-06-11 09:43:39.616822] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.053 [2024-06-11 09:43:39.620320] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.053 [2024-06-11 09:43:39.629375] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.053 [2024-06-11 09:43:39.630091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.053 [2024-06-11 09:43:39.630128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.053 [2024-06-11 09:43:39.630139] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.053 [2024-06-11 09:43:39.630383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.053 [2024-06-11 09:43:39.630604] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.053 [2024-06-11 09:43:39.630612] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.053 [2024-06-11 09:43:39.630620] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.053 [2024-06-11 09:43:39.634112] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.053 [2024-06-11 09:43:39.643169] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.053 [2024-06-11 09:43:39.643891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.053 [2024-06-11 09:43:39.643928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.053 [2024-06-11 09:43:39.643939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.053 [2024-06-11 09:43:39.644174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.054 [2024-06-11 09:43:39.644403] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.054 [2024-06-11 09:43:39.644412] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.054 [2024-06-11 09:43:39.644420] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.054 [2024-06-11 09:43:39.647914] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.054 [2024-06-11 09:43:39.656971] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.054 [2024-06-11 09:43:39.657684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.054 [2024-06-11 09:43:39.657721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.054 [2024-06-11 09:43:39.657736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.054 [2024-06-11 09:43:39.657972] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.054 [2024-06-11 09:43:39.658192] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.054 [2024-06-11 09:43:39.658200] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.054 [2024-06-11 09:43:39.658207] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.054 [2024-06-11 09:43:39.661718] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.054 [2024-06-11 09:43:39.670774] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.054 [2024-06-11 09:43:39.671417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.054 [2024-06-11 09:43:39.671455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.054 [2024-06-11 09:43:39.671466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.054 [2024-06-11 09:43:39.671701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.054 [2024-06-11 09:43:39.671921] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.054 [2024-06-11 09:43:39.671929] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.054 [2024-06-11 09:43:39.671937] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.054 [2024-06-11 09:43:39.675442] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.054 [2024-06-11 09:43:39.684517] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.054 [2024-06-11 09:43:39.685137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.054 [2024-06-11 09:43:39.685155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.054 [2024-06-11 09:43:39.685163] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.054 [2024-06-11 09:43:39.685386] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.054 [2024-06-11 09:43:39.685603] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.054 [2024-06-11 09:43:39.685611] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.054 [2024-06-11 09:43:39.685618] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.054 [2024-06-11 09:43:39.689111] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.054 [2024-06-11 09:43:39.698384] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.054 [2024-06-11 09:43:39.698868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.054 [2024-06-11 09:43:39.698882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.054 [2024-06-11 09:43:39.698890] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.054 [2024-06-11 09:43:39.699106] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.054 [2024-06-11 09:43:39.699327] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.054 [2024-06-11 09:43:39.699340] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.054 [2024-06-11 09:43:39.699347] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.054 [2024-06-11 09:43:39.702845] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.054 [2024-06-11 09:43:39.712119] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.054 [2024-06-11 09:43:39.712700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.054 [2024-06-11 09:43:39.712715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.054 [2024-06-11 09:43:39.712723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.054 [2024-06-11 09:43:39.712939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.054 [2024-06-11 09:43:39.713154] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.054 [2024-06-11 09:43:39.713163] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.054 [2024-06-11 09:43:39.713170] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.054 [2024-06-11 09:43:39.716665] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.054 [2024-06-11 09:43:39.725940] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.054 [2024-06-11 09:43:39.726490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.054 [2024-06-11 09:43:39.726506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.054 [2024-06-11 09:43:39.726513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.054 [2024-06-11 09:43:39.726728] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.054 [2024-06-11 09:43:39.726946] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.054 [2024-06-11 09:43:39.726955] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.054 [2024-06-11 09:43:39.726961] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.054 [2024-06-11 09:43:39.730458] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.054 [2024-06-11 09:43:39.739920] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.054 [2024-06-11 09:43:39.740612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.054 [2024-06-11 09:43:39.740650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.054 [2024-06-11 09:43:39.740660] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.054 [2024-06-11 09:43:39.740896] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.054 [2024-06-11 09:43:39.741117] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.054 [2024-06-11 09:43:39.741128] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.054 [2024-06-11 09:43:39.741135] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.054 [2024-06-11 09:43:39.744634] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.054 [2024-06-11 09:43:39.753702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.054 [2024-06-11 09:43:39.754377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.054 [2024-06-11 09:43:39.754415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.054 [2024-06-11 09:43:39.754427] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.054 [2024-06-11 09:43:39.754665] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.054 [2024-06-11 09:43:39.754884] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.054 [2024-06-11 09:43:39.754893] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.054 [2024-06-11 09:43:39.754900] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.054 [2024-06-11 09:43:39.758401] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.054 [2024-06-11 09:43:39.767481] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.054 [2024-06-11 09:43:39.768075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.054 [2024-06-11 09:43:39.768093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.054 [2024-06-11 09:43:39.768101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.054 [2024-06-11 09:43:39.768325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.054 [2024-06-11 09:43:39.768542] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.054 [2024-06-11 09:43:39.768550] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.054 [2024-06-11 09:43:39.768558] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.054 [2024-06-11 09:43:39.772069] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.054 [2024-06-11 09:43:39.781356] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.054 [2024-06-11 09:43:39.781897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.054 [2024-06-11 09:43:39.781934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.054 [2024-06-11 09:43:39.781947] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.054 [2024-06-11 09:43:39.782183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.054 [2024-06-11 09:43:39.782414] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.054 [2024-06-11 09:43:39.782425] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.054 [2024-06-11 09:43:39.782432] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.055 [2024-06-11 09:43:39.785930] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.055 [2024-06-11 09:43:39.795206] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.055 [2024-06-11 09:43:39.795864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.055 [2024-06-11 09:43:39.795901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.055 [2024-06-11 09:43:39.795911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.055 [2024-06-11 09:43:39.796151] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.055 [2024-06-11 09:43:39.796382] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.055 [2024-06-11 09:43:39.796392] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.055 [2024-06-11 09:43:39.796399] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.055 [2024-06-11 09:43:39.799905] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.055 [2024-06-11 09:43:39.808978] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.055 [2024-06-11 09:43:39.809649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.055 [2024-06-11 09:43:39.809687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.055 [2024-06-11 09:43:39.809698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.055 [2024-06-11 09:43:39.809933] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.055 [2024-06-11 09:43:39.810153] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.055 [2024-06-11 09:43:39.810162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.055 [2024-06-11 09:43:39.810169] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.055 [2024-06-11 09:43:39.813678] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.055 [2024-06-11 09:43:39.822755] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.055 [2024-06-11 09:43:39.823423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.055 [2024-06-11 09:43:39.823461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.055 [2024-06-11 09:43:39.823473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.055 [2024-06-11 09:43:39.823709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.055 [2024-06-11 09:43:39.823929] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.055 [2024-06-11 09:43:39.823937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.055 [2024-06-11 09:43:39.823945] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.055 [2024-06-11 09:43:39.827444] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.055 [2024-06-11 09:43:39.836505] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.055 [2024-06-11 09:43:39.837089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.055 [2024-06-11 09:43:39.837107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.055 [2024-06-11 09:43:39.837115] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.055 [2024-06-11 09:43:39.837337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.055 [2024-06-11 09:43:39.837553] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.055 [2024-06-11 09:43:39.837560] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.055 [2024-06-11 09:43:39.837573] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.055 [2024-06-11 09:43:39.841066] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.055 [2024-06-11 09:43:39.850330] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.055 [2024-06-11 09:43:39.850923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.055 [2024-06-11 09:43:39.850938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.055 [2024-06-11 09:43:39.850946] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.055 [2024-06-11 09:43:39.851161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.055 [2024-06-11 09:43:39.851384] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.055 [2024-06-11 09:43:39.851393] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.055 [2024-06-11 09:43:39.851400] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.055 [2024-06-11 09:43:39.854891] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.055 [2024-06-11 09:43:39.864179] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.055 [2024-06-11 09:43:39.864828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.055 [2024-06-11 09:43:39.864866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.055 [2024-06-11 09:43:39.864876] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.055 [2024-06-11 09:43:39.865111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.055 [2024-06-11 09:43:39.865342] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.055 [2024-06-11 09:43:39.865352] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.055 [2024-06-11 09:43:39.865359] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.319 [2024-06-11 09:43:39.868859] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.319 [2024-06-11 09:43:39.877935] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.319 [2024-06-11 09:43:39.878634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.319 [2024-06-11 09:43:39.878671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.319 [2024-06-11 09:43:39.878682] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.319 [2024-06-11 09:43:39.878917] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.319 [2024-06-11 09:43:39.879137] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.319 [2024-06-11 09:43:39.879146] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.319 [2024-06-11 09:43:39.879154] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.319 [2024-06-11 09:43:39.882654] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.319 [2024-06-11 09:43:39.891713] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.319 [2024-06-11 09:43:39.892307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.319 [2024-06-11 09:43:39.892331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.319 [2024-06-11 09:43:39.892339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.319 [2024-06-11 09:43:39.892555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.319 [2024-06-11 09:43:39.892771] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.319 [2024-06-11 09:43:39.892779] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.319 [2024-06-11 09:43:39.892787] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.319 [2024-06-11 09:43:39.896276] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.319 [2024-06-11 09:43:39.905558] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.319 [2024-06-11 09:43:39.906130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.319 [2024-06-11 09:43:39.906145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.319 [2024-06-11 09:43:39.906153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.319 [2024-06-11 09:43:39.906374] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.319 [2024-06-11 09:43:39.906591] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.319 [2024-06-11 09:43:39.906599] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.319 [2024-06-11 09:43:39.906605] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.319 [2024-06-11 09:43:39.910098] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
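The retry storm is expected at this point in the run: bdevperf.sh deliberately kills the nvmf target (see the "Killed" line below) to exercise the host's reset path, so every reconnect is refused until the target comes back. Outside a fault-injection test the same behaviour is normally bounded when the controller is attached; a hedged sketch using the reconnect options of SPDK's scripts/rpc.py (flag spellings as in recent SPDK trees; verify with `scripts/rpc.py bdev_nvme_attach_controller --help`):

    # give up after ~30 s of controller loss instead of retrying indefinitely,
    # and space reconnect attempts 2 s apart
    $ scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
          -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
          --ctrlr-loss-timeout-sec 30 --reconnect-delay-sec 2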
00:29:08.319 [2024-06-11 09:43:39.919374] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:08.319 [2024-06-11 09:43:39.920036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.319 [2024-06-11 09:43:39.920073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420
00:29:08.319 [2024-06-11 09:43:39.920084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set
00:29:08.319 [2024-06-11 09:43:39.920329] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor
00:29:08.319 [2024-06-11 09:43:39.920550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:08.319 [2024-06-11 09:43:39.920558] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:08.319 [2024-06-11 09:43:39.920565] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:08.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1324154 Killed "${NVMF_APP[@]}" "$@"
00:29:08.319 09:43:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:08.319 09:43:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:08.319 09:43:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:08.319 09:43:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable
00:29:08.319 09:43:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:08.319 [2024-06-11 09:43:39.924064] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:08.319 09:43:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1325625
00:29:08.319 09:43:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1325625
00:29:08.319 09:43:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:08.319 09:43:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 1325625 ']'
00:29:08.319 09:43:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:08.319 09:43:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100
00:29:08.319 09:43:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
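Here tgt_init relaunches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with the same core mask, and waitforlisten blocks until the new process (pid 1325625) answers on its UNIX RPC socket. A minimal sketch of that wait loop (not the autotest helper itself; rpc_get_methods is a standard SPDK RPC, and the pid/socket/retry values are the ones from the trace above):

    pid=1325625 rpc_addr=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do                    # max_retries=100, as in the trace
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1                                # poll until the RPC socket answers
    done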
00:29:08.319 09:43:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable
00:29:08.319 09:43:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-06-11 09:43:39.933137 - 09:43:39.937780] [... one more full reconnect-failure cycle for tqpair=0x229b840, identical to the cycle above ...]
[2024-06-11 09:43:39.947057 - 09:43:39.951758] [... one more full reconnect-failure cycle, identical to the cycle above ...]
00:29:08.319 [2024-06-11 09:43:39.960825] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.319 [2024-06-11 09:43:39.961577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.319 [2024-06-11 09:43:39.961615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.319 [2024-06-11 09:43:39.961626] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.319 [2024-06-11 09:43:39.961862] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.319 [2024-06-11 09:43:39.962086] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.319 [2024-06-11 09:43:39.962096] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.319 [2024-06-11 09:43:39.962103] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.319 [2024-06-11 09:43:39.965618] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.319 [2024-06-11 09:43:39.966148] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:29:08.319 [2024-06-11 09:43:39.966193] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.319 [2024-06-11 09:43:39.974683] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.319 [2024-06-11 09:43:39.975167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.319 [2024-06-11 09:43:39.975186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.319 [2024-06-11 09:43:39.975194] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.319 [2024-06-11 09:43:39.975416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.319 [2024-06-11 09:43:39.975632] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.319 [2024-06-11 09:43:39.975640] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.319 [2024-06-11 09:43:39.975647] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.319 [2024-06-11 09:43:39.979139] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.319 [2024-06-11 09:43:39.988614] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.319 [2024-06-11 09:43:39.989195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.320 [2024-06-11 09:43:39.989210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.320 [2024-06-11 09:43:39.989218] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.320 [2024-06-11 09:43:39.989439] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.320 [2024-06-11 09:43:39.989656] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.320 [2024-06-11 09:43:39.989667] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.320 [2024-06-11 09:43:39.989674] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.320 [2024-06-11 09:43:39.993163] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.320 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.320 [2024-06-11 09:43:40.002910] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.320 [2024-06-11 09:43:40.003540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.320 [2024-06-11 09:43:40.003578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.320 [2024-06-11 09:43:40.003590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.320 [2024-06-11 09:43:40.003830] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.320 [2024-06-11 09:43:40.004055] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.320 [2024-06-11 09:43:40.004064] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.320 [2024-06-11 09:43:40.004072] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.320 [2024-06-11 09:43:40.007584] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
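The "EAL: No free 2048 kB hugepages reported on node 1" line above is typically informational, not a failure: DPDK found its hugepage reservation on the other NUMA node. A quick spot check of the per-node counts (not part of the test itself; just one way to inspect the same state):

  # Per-node 2 MiB hugepage pool sizes:
  grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages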
00:29:08.320 [2024-06-11 09:43:40.016656] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.320 [2024-06-11 09:43:40.017228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.320 [2024-06-11 09:43:40.017247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.320 [2024-06-11 09:43:40.017255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.320 [2024-06-11 09:43:40.017478] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.320 [2024-06-11 09:43:40.017695] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.320 [2024-06-11 09:43:40.017703] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.320 [2024-06-11 09:43:40.017711] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.320 [2024-06-11 09:43:40.021208] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.320 [2024-06-11 09:43:40.030571] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.320 [2024-06-11 09:43:40.031069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:08.320 [2024-06-11 09:43:40.031198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.320 [2024-06-11 09:43:40.031213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.320 [2024-06-11 09:43:40.031221] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.320 [2024-06-11 09:43:40.031445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.320 [2024-06-11 09:43:40.031662] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.320 [2024-06-11 09:43:40.031670] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.320 [2024-06-11 09:43:40.031677] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.320 [2024-06-11 09:43:40.035171] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.320 [2024-06-11 09:43:40.044453] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.320 [2024-06-11 09:43:40.044924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.320 [2024-06-11 09:43:40.044947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.320 [2024-06-11 09:43:40.044955] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.320 [2024-06-11 09:43:40.045176] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.320 [2024-06-11 09:43:40.045402] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.320 [2024-06-11 09:43:40.045414] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.320 [2024-06-11 09:43:40.045421] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.320 [2024-06-11 09:43:40.048921] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.320 [2024-06-11 09:43:40.058197] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.320 [2024-06-11 09:43:40.058942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.320 [2024-06-11 09:43:40.058982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.320 [2024-06-11 09:43:40.058993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.320 [2024-06-11 09:43:40.059231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.320 [2024-06-11 09:43:40.059459] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.320 [2024-06-11 09:43:40.059470] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.320 [2024-06-11 09:43:40.059478] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.320 [2024-06-11 09:43:40.062988] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.320 [2024-06-11 09:43:40.072065] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.320 [2024-06-11 09:43:40.072811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.320 [2024-06-11 09:43:40.072851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.320 [2024-06-11 09:43:40.072862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.320 [2024-06-11 09:43:40.073100] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.320 [2024-06-11 09:43:40.073328] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.320 [2024-06-11 09:43:40.073339] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.320 [2024-06-11 09:43:40.073347] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.320 [2024-06-11 09:43:40.076852] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.320 [2024-06-11 09:43:40.085917] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.320 [2024-06-11 09:43:40.086627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.320 [2024-06-11 09:43:40.086667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.320 [2024-06-11 09:43:40.086678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.320 [2024-06-11 09:43:40.086914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.320 [2024-06-11 09:43:40.087135] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.320 [2024-06-11 09:43:40.087145] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.320 [2024-06-11 09:43:40.087153] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.320 [2024-06-11 09:43:40.090657] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.320 [2024-06-11 09:43:40.097344] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.320 [2024-06-11 09:43:40.097373] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.320 [2024-06-11 09:43:40.097380] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:08.320 [2024-06-11 09:43:40.097390] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:08.320 [2024-06-11 09:43:40.097396] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
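The app_setup_trace notices above spell out how to capture the tracepoint data enabled by -e 0xFFFF; both commands below come straight from those log lines:

  # Live snapshot of nvmf tracepoints while the target runs:
  spdk_trace -s nvmf -i 0
  # Or keep the shared-memory trace file for offline analysis/debug:
  cp /dev/shm/nvmf_trace.0 /tmp/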
00:29:08.320 [2024-06-11 09:43:40.097532] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:08.320 [2024-06-11 09:43:40.097691] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.320 [2024-06-11 09:43:40.097692] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:08.320 [2024-06-11 09:43:40.099724] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.320 [2024-06-11 09:43:40.100204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.320 [2024-06-11 09:43:40.100224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.320 [2024-06-11 09:43:40.100232] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.320 [2024-06-11 09:43:40.100454] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.320 [2024-06-11 09:43:40.100671] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.320 [2024-06-11 09:43:40.100682] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.320 [2024-06-11 09:43:40.100689] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.320 [2024-06-11 09:43:40.104180] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.320 [2024-06-11 09:43:40.113653] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.320 [2024-06-11 09:43:40.114248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.320 [2024-06-11 09:43:40.114264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.320 [2024-06-11 09:43:40.114272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.321 [2024-06-11 09:43:40.114497] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.321 [2024-06-11 09:43:40.114714] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.321 [2024-06-11 09:43:40.114723] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.321 [2024-06-11 09:43:40.114730] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.321 [2024-06-11 09:43:40.118219] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
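The three reactor_run notices line up with the -m 0xE core mask passed at startup: 0xE is binary 1110, i.e. cores 1, 2 and 3, matching the earlier "Total cores available: 3" notice. A one-liner to decode such a mask:

  # Decode an SPDK core mask (0xE -> cores 1, 2, 3):
  mask=0xE
  for i in $(seq 0 31); do (( (mask >> i) & 1 )) && echo "core $i"; done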
00:29:08.321 [2024-06-11 09:43:40.127489] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.321 [2024-06-11 09:43:40.128175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.321 [2024-06-11 09:43:40.128216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.321 [2024-06-11 09:43:40.128227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.321 [2024-06-11 09:43:40.128474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.321 [2024-06-11 09:43:40.128697] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.321 [2024-06-11 09:43:40.128706] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.321 [2024-06-11 09:43:40.128714] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.321 [2024-06-11 09:43:40.132220] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.584 [2024-06-11 09:43:40.141291] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.584 [2024-06-11 09:43:40.141825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.584 [2024-06-11 09:43:40.141846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.584 [2024-06-11 09:43:40.141854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.584 [2024-06-11 09:43:40.142070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.584 [2024-06-11 09:43:40.142287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.584 [2024-06-11 09:43:40.142297] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.584 [2024-06-11 09:43:40.142305] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.584 [2024-06-11 09:43:40.145804] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.584 [2024-06-11 09:43:40.155069] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.584 [2024-06-11 09:43:40.155786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.584 [2024-06-11 09:43:40.155827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.584 [2024-06-11 09:43:40.155839] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.584 [2024-06-11 09:43:40.156079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.584 [2024-06-11 09:43:40.156300] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.584 [2024-06-11 09:43:40.156310] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.584 [2024-06-11 09:43:40.156326] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.584 [2024-06-11 09:43:40.159820] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.584 [2024-06-11 09:43:40.168897] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.584 [2024-06-11 09:43:40.169622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.584 [2024-06-11 09:43:40.169661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.584 [2024-06-11 09:43:40.169674] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.584 [2024-06-11 09:43:40.169914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.584 [2024-06-11 09:43:40.170135] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.584 [2024-06-11 09:43:40.170145] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.584 [2024-06-11 09:43:40.170153] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.584 [2024-06-11 09:43:40.173653] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
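Every retry cycle in the loop above is the same four steps: a NOTICE that the controller is being reset, connect() to 10.0.0.2:4420 failing with errno 111 (ECONNREFUSED, meaning nothing is listening on that port yet), a failed flush on the dead socket, and a failed controller reinitialization, after which bdev_nvme schedules the next attempt. A hypothetical spot check from the initiator side (nc is not used by the test; for illustration only):

  # Succeeds only once the listener RPC further down has completed:
  nc -z -w1 10.0.0.2 4420 && echo "listener up" || echo "refused/unreachable"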
00:29:08.584 09:43:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:08.584 09:43:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:29:08.584 09:43:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:08.584 09:43:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:08.584 09:43:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.584 [2024-06-11 09:43:40.182719] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.584 [2024-06-11 09:43:40.183172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.584 [2024-06-11 09:43:40.183192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.584 [2024-06-11 09:43:40.183200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.584 [2024-06-11 09:43:40.183421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.584 [2024-06-11 09:43:40.183640] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.584 [2024-06-11 09:43:40.183649] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.584 [2024-06-11 09:43:40.183656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.584 [2024-06-11 09:43:40.187146] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.584 [2024-06-11 09:43:40.196623] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.584 [2024-06-11 09:43:40.197218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.584 [2024-06-11 09:43:40.197234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.584 [2024-06-11 09:43:40.197242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.584 [2024-06-11 09:43:40.197462] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.584 [2024-06-11 09:43:40.197680] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.584 [2024-06-11 09:43:40.197690] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.584 [2024-06-11 09:43:40.197697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.584 [2024-06-11 09:43:40.201189] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.584 [2024-06-11 09:43:40.210458] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.584 [2024-06-11 09:43:40.211119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.584 [2024-06-11 09:43:40.211158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.584 [2024-06-11 09:43:40.211170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.584 [2024-06-11 09:43:40.211413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.584 [2024-06-11 09:43:40.211634] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.584 [2024-06-11 09:43:40.211645] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.584 [2024-06-11 09:43:40.211653] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.584 [2024-06-11 09:43:40.215147] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.584 09:43:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.584 09:43:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:08.584 09:43:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.584 09:43:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.584 [2024-06-11 09:43:40.223578] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.584 [2024-06-11 09:43:40.224217] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.584 [2024-06-11 09:43:40.224808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.584 [2024-06-11 09:43:40.224827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.584 [2024-06-11 09:43:40.224835] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.584 [2024-06-11 09:43:40.225051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.584 [2024-06-11 09:43:40.225268] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.584 [2024-06-11 09:43:40.225277] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.584 [2024-06-11 09:43:40.225284] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.584 [2024-06-11 09:43:40.228778] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.584 09:43:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.584 09:43:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:08.584 09:43:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.584 09:43:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.584 [2024-06-11 09:43:40.238047] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.584 [2024-06-11 09:43:40.238613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.584 [2024-06-11 09:43:40.238631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.584 [2024-06-11 09:43:40.238639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.584 [2024-06-11 09:43:40.238855] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.584 [2024-06-11 09:43:40.239071] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.584 [2024-06-11 09:43:40.239081] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.584 [2024-06-11 09:43:40.239088] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.584 [2024-06-11 09:43:40.242578] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.584 [2024-06-11 09:43:40.251842] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.584 [2024-06-11 09:43:40.252557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.584 [2024-06-11 09:43:40.252597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.584 [2024-06-11 09:43:40.252609] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.585 [2024-06-11 09:43:40.252846] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.585 [2024-06-11 09:43:40.253068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.585 [2024-06-11 09:43:40.253079] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.585 [2024-06-11 09:43:40.253087] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.585 [2024-06-11 09:43:40.256595] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.585 [2024-06-11 09:43:40.265686] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.585 Malloc0 00:29:08.585 [2024-06-11 09:43:40.266411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.585 [2024-06-11 09:43:40.266451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.585 [2024-06-11 09:43:40.266465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.585 09:43:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.585 [2024-06-11 09:43:40.266704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.585 [2024-06-11 09:43:40.266925] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.585 [2024-06-11 09:43:40.266936] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.585 [2024-06-11 09:43:40.266944] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.585 09:43:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:08.585 09:43:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.585 09:43:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.585 [2024-06-11 09:43:40.270451] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.585 09:43:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.585 09:43:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:08.585 09:43:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.585 09:43:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.585 [2024-06-11 09:43:40.279521] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.585 [2024-06-11 09:43:40.280121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.585 [2024-06-11 09:43:40.280141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.585 [2024-06-11 09:43:40.280150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.585 [2024-06-11 09:43:40.280372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.585 [2024-06-11 09:43:40.280590] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.585 [2024-06-11 09:43:40.280600] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.585 [2024-06-11 09:43:40.280608] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.585 [2024-06-11 09:43:40.284098] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:08.585 09:43:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.585 09:43:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:08.585 09:43:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.585 09:43:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:08.585 [2024-06-11 09:43:40.293373] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.585 [2024-06-11 09:43:40.293950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.585 [2024-06-11 09:43:40.293966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229b840 with addr=10.0.0.2, port=4420 00:29:08.585 [2024-06-11 09:43:40.293978] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229b840 is same with the state(5) to be set 00:29:08.585 [2024-06-11 09:43:40.294194] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229b840 (9): Bad file descriptor 00:29:08.585 [2024-06-11 09:43:40.294416] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:08.585 [2024-06-11 09:43:40.294427] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:08.585 [2024-06-11 09:43:40.294434] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.585 [2024-06-11 09:43:40.297691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.585 [2024-06-11 09:43:40.297922] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:08.585 09:43:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.585 09:43:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1324520 00:29:08.585 [2024-06-11 09:43:40.307282] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.585 [2024-06-11 09:43:40.343294] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
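This is the RPC sequence that brings the relaunched target back to a usable state: transport, malloc bdev, subsystem, namespace, listener. Once nvmf_subsystem_add_listener completes ("Listening on 10.0.0.2 port 4420" above), the pending reset finally succeeds ("Resetting controller successful"). The same bring-up as standalone calls to SPDK's scripts/rpc.py, assuming the default /var/tmp/spdk.sock socket that rpc_cmd targets here:

  # Equivalent of the rpc_cmd invocations in the trace (flags copied verbatim):
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420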
00:29:18.615 00:29:18.615 Latency(us) 00:29:18.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.615 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:18.615 Verification LBA range: start 0x0 length 0x4000 00:29:18.615 Nvme1n1 : 15.01 6967.46 27.22 8418.86 0.00 8293.39 788.48 16820.91 00:29:18.615 =================================================================================================================== 00:29:18.615 Total : 6967.46 27.22 8418.86 0.00 8293.39 788.48 16820.91 00:29:18.615 09:43:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:18.615 09:43:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:18.615 09:43:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.615 09:43:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:18.615 09:43:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.615 09:43:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:18.615 09:43:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:18.615 09:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:18.615 09:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:18.615 09:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:18.615 09:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:18.615 09:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:18.615 09:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:18.615 rmmod nvme_tcp 00:29:18.615 rmmod nvme_fabrics 00:29:18.615 rmmod nvme_keyring 00:29:18.615 09:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:18.615 09:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:18.615 09:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1325625 ']' 00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1325625 00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 1325625 ']' 00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 1325625 00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname 00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1325625 00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1325625' 00:29:18.616 killing process with pid 1325625 00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 1325625 00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 1325625 00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
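Teardown mirrors setup: delete the subsystem, unload the kernel NVMe/TCP stack (the rmmod lines above), then kill the target process recorded at startup. A sketch of the same steps outside the harness, with the pid value taken from this run:

  # Manual equivalent of the nvmftestfini/killprocess path for this run:
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 1325625   # $nvmfpid from nvmfappstart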
00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:18.616 09:43:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.563 09:43:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:20.563 00:29:20.563 real 0m27.336s 00:29:20.563 user 1m2.028s 00:29:20.563 sys 0m7.019s 00:29:20.563 09:43:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:20.563 09:43:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:20.563 ************************************ 00:29:20.563 END TEST nvmf_bdevperf 00:29:20.563 ************************************ 00:29:20.563 09:43:51 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:20.563 09:43:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:20.563 09:43:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:20.563 09:43:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:20.563 ************************************ 00:29:20.563 START TEST nvmf_target_disconnect 00:29:20.563 ************************************ 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:20.563 * Looking for test storage... 
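The suite now moves on to the target-disconnect test. run_test only wraps the script invocation shown above, so the same test can be launched on its own with the same transport argument:

  # Run just this test outside the full suite (paths from this job):
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  test/nvmf/host/target_disconnect.sh --transport=tcp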
00:29:20.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
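The sourced nvmf/common.sh derives the host identity printed above from nvme gen-hostnqn: the NQN carries a UUID suffix, and the host ID is that bare UUID. One way to reproduce the derivation (the parameter expansion is an assumption; only the resulting values appear in the log):

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # bare <uuid> part, assumed derivation
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")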
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:20.563 09:43:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:28.709 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:28.709 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:28.709 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:28.709 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:28.709 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:28.709 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:28.709 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:28.709 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:28.709 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:28.709 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:28.709 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:28.709 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:28.710 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:28.710 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.710 09:43:59 
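Device discovery above matched both ports of one ice-driven NIC: PCI device 0x8086:0x159b, which is in the script's e810 list. A hand check of the same lookup:

  # List the E810 ports the script just found (vendor:device IDs from the log):
  lspci -d 8086:159b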
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:28.710 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:28.710 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:28.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:28.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:29:28.710 00:29:28.710 --- 10.0.0.2 ping statistics --- 00:29:28.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.710 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:28.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:28.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:29:28.710 00:29:28.710 --- 10.0.0.1 ping statistics --- 00:29:28.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.710 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:28.710 ************************************ 00:29:28.710 START TEST nvmf_target_disconnect_tc1 00:29:28.710 ************************************ 00:29:28.710 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:29:28.711 
09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:28.711 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.711 [2024-06-11 09:43:59.550782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.711 [2024-06-11 09:43:59.550857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15451d0 with addr=10.0.0.2, port=4420 00:29:28.711 [2024-06-11 09:43:59.550888] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:28.711 [2024-06-11 09:43:59.550903] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:28.711 [2024-06-11 09:43:59.550912] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:28.711 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:28.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:28.711 Initializing NVMe Controllers 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:28.711 00:29:28.711 real 0m0.136s 00:29:28.711 user 0m0.054s 00:29:28.711 sys 
0m0.081s 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:28.711 ************************************ 00:29:28.711 END TEST nvmf_target_disconnect_tc1 00:29:28.711 ************************************ 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:28.711 ************************************ 00:29:28.711 START TEST nvmf_target_disconnect_tc2 00:29:28.711 ************************************ 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1331690 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1331690 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1331690 ']' 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:28.711 09:43:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.711 [2024-06-11 09:43:59.714426] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
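Here disconnect_init brings up the target side of tc2: nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace with core mask 0xF0, and waitforlisten blocks until the app answers on its RPC socket before any rpc_cmd configuration is sent. A minimal hand-run equivalent, as a sketch (the polling loop is an assumption; the harness's waitforlisten does additional pid checking):

    # Sketch: start the target in the test namespace, wait for its RPC socket.
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    # /var/tmp/spdk.sock is the default RPC endpoint named in the log
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

The DPDK EAL parameter dump that follows is that target process initializing.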
00:29:28.711 [2024-06-11 09:43:59.714498] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.711 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.711 [2024-06-11 09:43:59.804288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:28.711 [2024-06-11 09:43:59.898752] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.711 [2024-06-11 09:43:59.898804] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.711 [2024-06-11 09:43:59.898813] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.711 [2024-06-11 09:43:59.898821] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.711 [2024-06-11 09:43:59.898826] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:28.711 [2024-06-11 09:43:59.898993] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:29:28.711 [2024-06-11 09:43:59.899157] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:29:28.711 [2024-06-11 09:43:59.899327] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:29:28.711 [2024-06-11 09:43:59.899340] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:29:28.972 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:28.972 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:29:28.972 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:28.972 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:28.972 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.972 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.972 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:28.972 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.972 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.972 Malloc0 00:29:28.972 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.973 [2024-06-11 09:44:00.634689] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.973 [2024-06-11 09:44:00.675074] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1331936 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:28.973 09:44:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:28.973 EAL: No free 2048 kB hugepages reported on node 1 00:29:30.889 09:44:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1331690 00:29:30.889 09:44:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:31.159 Read completed with error (sct=0, sc=8) 00:29:31.159 starting I/O failed 00:29:31.159 Write completed with error (sct=0, sc=8) 
00:29:31.159 starting I/O failed [... the same pair of records, Read/Write completed with error (sct=0, sc=8) followed by starting I/O failed, repeats for every remaining outstanding I/O of the -q 32 workload ...] 00:29:31.159 [2024-06-11 09:44:02.708833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:31.159 [2024-06-11 09:44:02.709283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.159 [2024-06-11 09:44:02.709305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.159 qpair failed and we were unable to recover it.
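The kill -9 of the target pid is the disconnect under test: every outstanding I/O of the reconnect workload completes with an error, the qpair surfaces CQ transport error -6, and each reconnect attempt then dies in connect() with errno = 111, since nothing is listening on 10.0.0.2:4420 any longer. On Linux, errno 111 is ECONNREFUSED, which a one-liner confirms (a sketch; any Python 3 will do):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused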
00:29:31.159 [... the identical three-record failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every further reconnect attempt between 09:44:02.710 and 09:44:02.753; final occurrence: ...] 00:29:31.163 [2024-06-11 09:44:02.753327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.753345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it.
00:29:31.163 [2024-06-11 09:44:02.753683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.753700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.753889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.753909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.754276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.754296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.754699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.754721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.754959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.754982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.755375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.755397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.755786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.755807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.756204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.756225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.756598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.756619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.756984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.757006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 
00:29:31.163 [2024-06-11 09:44:02.757406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.757429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.757872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.757893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.758270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.758290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.758704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.758726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.759108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.759129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.759546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.759568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.759973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.759995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.760373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.760394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.760778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.760800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.761211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.761232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 
00:29:31.163 [2024-06-11 09:44:02.761615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.761637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.762041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.762062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.762457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.762479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.762852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.762872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.763206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.763227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.763608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.763629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.764004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.764025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.764376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.764398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.764801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.764823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 00:29:31.163 [2024-06-11 09:44:02.765234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.163 [2024-06-11 09:44:02.765255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.163 qpair failed and we were unable to recover it. 
00:29:31.163 [2024-06-11 09:44:02.765664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.765695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.766028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.766056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.766435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.766487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.766896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.766925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.767334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.767363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.767752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.767781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.768184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.768212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.768570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.768599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.768986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.769013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.769430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.769459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 
00:29:31.164 [2024-06-11 09:44:02.769794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.769822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.770206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.770239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.770613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.770642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.771046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.771075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.771543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.771573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.771978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.772007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.772299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.772338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.772738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.772767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.773170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.773198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.773586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.773616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 
00:29:31.164 [2024-06-11 09:44:02.774011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.774039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.774454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.774483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.774894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.774922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.775324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.775353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.775668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.775696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.776101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.776130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.776524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.776552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.164 [2024-06-11 09:44:02.776953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.164 [2024-06-11 09:44:02.776980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.164 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.777353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.777383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.777773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.777802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 
00:29:31.165 [2024-06-11 09:44:02.778203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.778231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.778611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.778641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.779038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.779066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.779468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.779498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.779911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.779939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.780235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.780263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.780610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.780639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.781056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.781084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.781349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.781378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.781742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.781770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 
00:29:31.165 [2024-06-11 09:44:02.782173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.782201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.782605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.782635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.783066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.783094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.783501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.783530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.783923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.783952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.784306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.784342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.784717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.784745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.785140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.785168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.785548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.785577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.785992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.786020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 
00:29:31.165 [2024-06-11 09:44:02.786419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.786448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.786865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.786899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.787275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.787303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.787733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.787761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.788174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.788203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.788588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.788617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.789010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.789039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.789446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.789475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.789878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.789907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.165 [2024-06-11 09:44:02.790252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.790280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 
00:29:31.165 [2024-06-11 09:44:02.790692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.165 [2024-06-11 09:44:02.790721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.165 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.791126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.791154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.791557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.791587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.792004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.792033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.792445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.792475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.792885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.792913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.793329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.793359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.793775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.793803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.794241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.794269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.794676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.794705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 
00:29:31.166 [2024-06-11 09:44:02.795110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.795138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.795540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.795634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.796137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.796173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.796571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.796604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.796895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.796931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.797353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.797384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.797774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.797806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.798078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.798108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.798503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.798534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.798868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.798896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 
00:29:31.166 [2024-06-11 09:44:02.799329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.799359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.799783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.799811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.800175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.800203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.800615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.800645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.801062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.801091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.801480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.801509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.801904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.801933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.802310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.802348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.802779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.802808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.803212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.803240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 
00:29:31.166 [2024-06-11 09:44:02.803653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.803682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.804086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.804121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.804511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.804541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.804944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.166 [2024-06-11 09:44:02.804973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.166 qpair failed and we were unable to recover it. 00:29:31.166 [2024-06-11 09:44:02.805381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-06-11 09:44:02.805412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-06-11 09:44:02.805788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-06-11 09:44:02.805816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-06-11 09:44:02.806213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-06-11 09:44:02.806241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-06-11 09:44:02.806663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-06-11 09:44:02.806692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-06-11 09:44:02.807076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-06-11 09:44:02.807105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 00:29:31.167 [2024-06-11 09:44:02.807445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.167 [2024-06-11 09:44:02.807474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.167 qpair failed and we were unable to recover it. 
00:29:31.167 [2024-06-11 09:44:02.807889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.167 [2024-06-11 09:44:02.807919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.167 qpair failed and we were unable to recover it.
00:29:31.167 [... the same three-line connect() failure (errno = 111 against tqpair=0x7f42a4000b90, addr=10.0.0.2, port=4420) repeats for every retry attempt between 09:44:02.808 and 09:44:02.896 ...]
00:29:31.173 [2024-06-11 09:44:02.896622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.173 [2024-06-11 09:44:02.896653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.173 qpair failed and we were unable to recover it.
00:29:31.173 [2024-06-11 09:44:02.897062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.897091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 00:29:31.173 [2024-06-11 09:44:02.897380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.897413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 00:29:31.173 [2024-06-11 09:44:02.897743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.897774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 00:29:31.173 [2024-06-11 09:44:02.898151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.898183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 00:29:31.173 [2024-06-11 09:44:02.898591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.898621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 00:29:31.173 [2024-06-11 09:44:02.899039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.899069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 00:29:31.173 [2024-06-11 09:44:02.899491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.899521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 00:29:31.173 [2024-06-11 09:44:02.899956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.899985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 00:29:31.173 [2024-06-11 09:44:02.900368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.900412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 00:29:31.173 [2024-06-11 09:44:02.900834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.900865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 
00:29:31.173 [2024-06-11 09:44:02.901273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.901303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 00:29:31.173 [2024-06-11 09:44:02.901726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.901756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 00:29:31.173 [2024-06-11 09:44:02.902176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.902206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 00:29:31.173 [2024-06-11 09:44:02.902589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.902620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 00:29:31.173 [2024-06-11 09:44:02.903027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.903056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 00:29:31.173 [2024-06-11 09:44:02.903486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.903517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 00:29:31.173 [2024-06-11 09:44:02.903925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.903956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 00:29:31.173 [2024-06-11 09:44:02.904427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.904458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 00:29:31.173 [2024-06-11 09:44:02.904872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.904901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 00:29:31.173 [2024-06-11 09:44:02.905301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.173 [2024-06-11 09:44:02.905338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.173 qpair failed and we were unable to recover it. 
00:29:31.174 [2024-06-11 09:44:02.905743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.905773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.906194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.906225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.906705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.906737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.907087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.907116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.907545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.907574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.907869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.907902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.908335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.908366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.908715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.908744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.909153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.909182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.909629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.909659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 
00:29:31.174 [2024-06-11 09:44:02.910079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.910107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.910511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.910541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.910816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.910846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.911265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.911295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.911708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.911740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.912155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.912186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.912594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.912626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.912922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.912953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.913362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.913392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.913813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.913843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 
00:29:31.174 [2024-06-11 09:44:02.914255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.914283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.914712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.914743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.915153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.174 [2024-06-11 09:44:02.915182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.174 qpair failed and we were unable to recover it. 00:29:31.174 [2024-06-11 09:44:02.915599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.915629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.915985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.916015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.916447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.916479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.916880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.916910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.917324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.917356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.917788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.917824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.918179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.918208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 
00:29:31.175 [2024-06-11 09:44:02.918593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.918622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.919038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.919067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.919479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.919510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.919925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.919954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.920369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.920399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.920760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.920789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.921149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.921180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.921588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.921617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.922020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.922048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.922474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.922504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 
00:29:31.175 [2024-06-11 09:44:02.922932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.922961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.923365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.923396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.923820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.923850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.924262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.924291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.924651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.924682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.925032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.925063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.925474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.925504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.925751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.925782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.926200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.926229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.926519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.926551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 
00:29:31.175 [2024-06-11 09:44:02.926970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.926999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.927426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.927457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.927874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.927903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.928309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.928347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.928753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.928783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.929201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.929231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.929637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.929668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.930083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.175 [2024-06-11 09:44:02.930112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.175 qpair failed and we were unable to recover it. 00:29:31.175 [2024-06-11 09:44:02.930412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.930441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.930850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.930879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 
00:29:31.176 [2024-06-11 09:44:02.931286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.931333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.931745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.931776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.932180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.932210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.932505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.932536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.932959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.932990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.933393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.933422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.933870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.933899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.934311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.934348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.934745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.934780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.935191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.935221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 
00:29:31.176 [2024-06-11 09:44:02.935612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.935644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.936049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.936080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.936373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.936407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.936756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.936788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.937194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.937223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.937613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.937644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.938057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.938087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.938499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.938529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.938945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.938975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.939375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.939406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 
00:29:31.176 [2024-06-11 09:44:02.939714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.939748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.940181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.940210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.940639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.940670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.940941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.940971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.941334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.941365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.941802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.941831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.942237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.942266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.942698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.942728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.943138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.943167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.943588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.943618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 
00:29:31.176 [2024-06-11 09:44:02.943909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.943941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.944351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.944383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.944782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.944812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.176 [2024-06-11 09:44:02.945226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.176 [2024-06-11 09:44:02.945255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.176 qpair failed and we were unable to recover it. 00:29:31.177 [2024-06-11 09:44:02.945713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-06-11 09:44:02.945744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-06-11 09:44:02.946150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-06-11 09:44:02.946180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-06-11 09:44:02.946606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-06-11 09:44:02.946637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-06-11 09:44:02.947090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-06-11 09:44:02.947119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-06-11 09:44:02.947580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-06-11 09:44:02.947610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-06-11 09:44:02.947990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-06-11 09:44:02.948022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 
00:29:31.177 [2024-06-11 09:44:02.948435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-06-11 09:44:02.948465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-06-11 09:44:02.948904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-06-11 09:44:02.948934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-06-11 09:44:02.949340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-06-11 09:44:02.949371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-06-11 09:44:02.949802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-06-11 09:44:02.949832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-06-11 09:44:02.950231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-06-11 09:44:02.950262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-06-11 09:44:02.950676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-06-11 09:44:02.950707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-06-11 09:44:02.951120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-06-11 09:44:02.951152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-06-11 09:44:02.951518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-06-11 09:44:02.951547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-06-11 09:44:02.951994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-06-11 09:44:02.952030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 00:29:31.177 [2024-06-11 09:44:02.952448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-06-11 09:44:02.952479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it. 
00:29:31.177 [2024-06-11 09:44:02.952881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.177 [2024-06-11 09:44:02.952911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.177 qpair failed and we were unable to recover it.
[2024-06-11 09:44:02.953 - 09:44:03.051] (the preceding three-message sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats approximately 200 more times with successive timestamps; repetitions collapsed)
00:29:31.454 [2024-06-11 09:44:03.051750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-06-11 09:44:03.051780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it.
00:29:31.454 [2024-06-11 09:44:03.052187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-06-11 09:44:03.052216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-06-11 09:44:03.052601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-06-11 09:44:03.052631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-06-11 09:44:03.053037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-06-11 09:44:03.053066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-06-11 09:44:03.053497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.454 [2024-06-11 09:44:03.053527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.454 qpair failed and we were unable to recover it. 00:29:31.454 [2024-06-11 09:44:03.053945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.053974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.054284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.054329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.054646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.054678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.055117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.055147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.055602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.055633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.056058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.056087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 
00:29:31.455 [2024-06-11 09:44:03.056497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.056527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.056940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.056971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.057381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.057411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.057813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.057843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.058249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.058277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.058706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.058743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.059155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.059185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.059580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.059611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.059976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.060006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.060411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.060441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 
00:29:31.455 [2024-06-11 09:44:03.060859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.060889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.061268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.061299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.061737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.061769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.062139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.062169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.062598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.062629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.063043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.063074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.063482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.063513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.065397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.065460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.065905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.065940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.066374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.066406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 
00:29:31.455 [2024-06-11 09:44:03.066776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.066805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.067219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.067249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.067695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.067731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.068149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.068181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.068602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.068633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.069049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.069080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.069494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.069525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.069945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.069975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.070376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.070407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.070851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.070885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 
00:29:31.455 [2024-06-11 09:44:03.071290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.071330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.455 [2024-06-11 09:44:03.071726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.455 [2024-06-11 09:44:03.071756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.455 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.072044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.072075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.072494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.072527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.072973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.073004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.073408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.073442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.073842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.073873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.074205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.074235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.074658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.074689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.075092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.075124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 
00:29:31.456 [2024-06-11 09:44:03.075542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.075575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.075968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.076000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.076404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.076435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.078123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.078183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.078530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.078570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.079021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.079060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.079459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.079489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.079926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.079956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.080431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.080462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.080905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.080935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 
00:29:31.456 [2024-06-11 09:44:03.081349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.081380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.081805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.081835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.082242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.082273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.082547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.082581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.083018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.083053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.083481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.083513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.083918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.083947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.084387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.084420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.084663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.084694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.085141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.085171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 
00:29:31.456 [2024-06-11 09:44:03.085553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.085584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.085876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.085907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.086337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.086368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.086796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.086826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.087191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.087221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.087545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.087574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.087979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.088010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.088332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.088365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.088784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.088814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 00:29:31.456 [2024-06-11 09:44:03.089225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.456 [2024-06-11 09:44:03.089256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.456 qpair failed and we were unable to recover it. 
00:29:31.456 [2024-06-11 09:44:03.089675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.089706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.090109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.090140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.090600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.090630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.091098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.091129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.091690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.091797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.092359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.092400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.092755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.092787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.093182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.093212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.093522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.093552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.093968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.093998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 
00:29:31.457 [2024-06-11 09:44:03.094408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.094439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.094844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.094874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.095288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.095328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.095758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.095789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.096211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.096241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.096658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.096700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.097085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.097116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.097571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.097602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.098013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.098044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.098494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.098528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 
00:29:31.457 [2024-06-11 09:44:03.098989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.099020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.099398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.099428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.099856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.099886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.100301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.100371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.100802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.100834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.101264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.101294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.101758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.101790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.102193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.102224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.102653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.102686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.103122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.103153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 
00:29:31.457 [2024-06-11 09:44:03.103636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.103742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.104265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.104303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.104700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.104732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.105149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.105180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.105598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.105630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.105918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.105950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.106245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.106276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.106628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.457 [2024-06-11 09:44:03.106663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.457 qpair failed and we were unable to recover it. 00:29:31.457 [2024-06-11 09:44:03.107092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.107122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.107421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.107453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 
00:29:31.458 [2024-06-11 09:44:03.107866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.107895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.108205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.108239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.108680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.108713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.109110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.109139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.109567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.109598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.110011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.110041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.110448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.110479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.110886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.110917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.111338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.111370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.111752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.111783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 
00:29:31.458 [2024-06-11 09:44:03.114274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.114362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.114806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.114842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.116616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.116676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.117160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.117196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.117624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.117656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.118074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.118114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.118440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.118474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.118915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.118946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.119389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.119421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.119842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.119873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 
00:29:31.458 [2024-06-11 09:44:03.120283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.120313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.120742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.120772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.121199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.121229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.121666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.121697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.122124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.122153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.122588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.122617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.123025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.123057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.123477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.123508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.123914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.123946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 00:29:31.458 [2024-06-11 09:44:03.124365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.458 [2024-06-11 09:44:03.124398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.458 qpair failed and we were unable to recover it. 
00:29:31.458 [2024-06-11 09:44:03.124764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.458 [2024-06-11 09:44:03.124796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.125210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.125241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.125659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.125691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.126141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.126173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.126597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.126630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.127050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.127079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.127492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.127523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.127944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.127975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.128384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.128416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.130138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.130197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.130638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.130671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.131088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.131119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.131523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.131555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.131976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.132007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.132438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.132472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.132885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.132915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.133275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.133306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.133736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.133766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.134166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.134197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.134603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.134635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.135003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.135033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.135448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.135479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.135787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.135821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.136214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.136243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.136652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.136682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.137121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.137157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.137594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.137626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.138036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.138066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.138478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.138509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.138933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.138962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.139345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.139375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.139840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.139871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.140278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.140308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.142600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.142669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.143167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.143203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.143612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.143645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.143957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.143997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.144430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.459 [2024-06-11 09:44:03.144459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.459 qpair failed and we were unable to recover it.
00:29:31.459 [2024-06-11 09:44:03.144872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.144903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.145327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.145359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.147196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.147253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.147705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.147738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.148099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.148129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.148550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.148587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.149069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.149099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.149541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.149573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.149855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.149888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.150341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.150373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.150721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.150753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.151167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.151197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.151673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.151703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.152123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.152153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.152562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.152593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.153019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.153049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.153492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.153522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.153945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.153975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.154379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.154411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.154835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.154866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.155280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.155310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.155780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.155811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.156221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.156252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.156710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.156742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.157144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.157173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.157605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.157636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.158039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.158070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.158492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.158530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.158954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.158985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.159278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.159311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.159702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.159732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.160147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.160177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.160619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.160654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.160946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.160976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.161384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.161415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.161843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.161875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.162328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.162359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.162806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.460 [2024-06-11 09:44:03.162836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.460 qpair failed and we were unable to recover it.
00:29:31.460 [2024-06-11 09:44:03.163249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.163279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.163722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.163754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.164174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.164204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.164603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.164635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.165036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.165066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.165495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.165526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.165956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.165986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.166321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.166353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.166814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.166844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.167304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.167342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.167773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.167803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.168109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.168139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.168655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.168759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.169288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.169344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.169783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.169813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.170080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.170109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.170589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.170695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.171204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.171242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.171452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.171486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.171935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.171965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.172328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.172361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.172790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.172820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.173250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.173280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.173724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.173755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.174151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.174182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.174734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.174764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.175182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.175211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.175618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.175650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.176068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.176099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.176385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.176428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.176831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.176861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.177156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.177193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.177615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.177647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.178069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.178101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.178520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.178551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.178966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.178996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.179406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.179436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.179884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.179915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.461 qpair failed and we were unable to recover it.
00:29:31.461 [2024-06-11 09:44:03.180337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.461 [2024-06-11 09:44:03.180370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.180829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.180859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.181265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.181294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.181702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.181734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.182146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.182177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.182579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.182610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.183015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.183046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.183423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.183454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.183774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.183804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.184232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.184262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.184736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.184767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.185030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.185059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.185417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.185448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.185893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.185928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.186336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.186368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.186824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.186855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.187285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.187326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.187763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.187793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.188198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.188230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.188708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.188741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.189147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.189177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.189605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.189635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.190050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.190080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.190511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.190542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.190831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.190865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.191296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.191334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.191836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.191867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.192293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.192345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.192787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.192817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.193120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.193151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.193560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.193591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.194039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.194074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.194605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.194710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.195191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.195229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.195665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.195697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.196114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.196144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.196599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.196629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.197054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.197085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.462 qpair failed and we were unable to recover it.
00:29:31.462 [2024-06-11 09:44:03.197400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.462 [2024-06-11 09:44:03.197436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.463 qpair failed and we were unable to recover it.
00:29:31.463 [2024-06-11 09:44:03.197895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.463 [2024-06-11 09:44:03.197925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.463 qpair failed and we were unable to recover it.
00:29:31.463 [2024-06-11 09:44:03.198327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.463 [2024-06-11 09:44:03.198362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.463 qpair failed and we were unable to recover it.
00:29:31.463 [2024-06-11 09:44:03.198779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.463 [2024-06-11 09:44:03.198809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.463 qpair failed and we were unable to recover it.
00:29:31.463 [2024-06-11 09:44:03.199208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.463 [2024-06-11 09:44:03.199237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.463 qpair failed and we were unable to recover it.
00:29:31.463 [2024-06-11 09:44:03.199691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.463 [2024-06-11 09:44:03.199723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.463 qpair failed and we were unable to recover it.
00:29:31.463 [2024-06-11 09:44:03.200125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.463 [2024-06-11 09:44:03.200154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.463 qpair failed and we were unable to recover it.
00:29:31.463 [2024-06-11 09:44:03.200480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.463 [2024-06-11 09:44:03.200513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.463 qpair failed and we were unable to recover it.
00:29:31.463 [2024-06-11 09:44:03.200927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.463 [2024-06-11 09:44:03.200957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.463 qpair failed and we were unable to recover it.
00:29:31.463 [2024-06-11 09:44:03.201311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.463 [2024-06-11 09:44:03.201355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.463 qpair failed and we were unable to recover it.
00:29:31.463 [2024-06-11 09:44:03.201810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.463 [2024-06-11 09:44:03.201841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.463 qpair failed and we were unable to recover it.
00:29:31.463 [2024-06-11 09:44:03.202281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.463 [2024-06-11 09:44:03.202311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.463 qpair failed and we were unable to recover it.
00:29:31.463 [2024-06-11 09:44:03.202760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.463 [2024-06-11 09:44:03.202790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.463 qpair failed and we were unable to recover it.
00:29:31.463 [2024-06-11 09:44:03.203163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.463 [2024-06-11 09:44:03.203192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.463 qpair failed and we were unable to recover it.
00:29:31.463 [2024-06-11 09:44:03.203606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.463 [2024-06-11 09:44:03.203638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.463 qpair failed and we were unable to recover it.
00:29:31.463 [2024-06-11 09:44:03.204046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.463 [2024-06-11 09:44:03.204075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.463 qpair failed and we were unable to recover it.
00:29:31.463 [2024-06-11 09:44:03.204479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.463 [2024-06-11 09:44:03.204510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.463 qpair failed and we were unable to recover it.
00:29:31.463 [2024-06-11 09:44:03.204947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.463 [2024-06-11 09:44:03.204978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.463 qpair failed and we were unable to recover it.
00:29:31.463 [2024-06-11 09:44:03.205404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-06-11 09:44:03.205435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.463 qpair failed and we were unable to recover it. 00:29:31.463 [2024-06-11 09:44:03.205775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-06-11 09:44:03.205806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.463 qpair failed and we were unable to recover it. 00:29:31.463 [2024-06-11 09:44:03.206228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-06-11 09:44:03.206259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.463 qpair failed and we were unable to recover it. 00:29:31.463 [2024-06-11 09:44:03.206713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-06-11 09:44:03.206745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.463 qpair failed and we were unable to recover it. 00:29:31.463 [2024-06-11 09:44:03.207161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-06-11 09:44:03.207192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.463 qpair failed and we were unable to recover it. 00:29:31.463 [2024-06-11 09:44:03.207467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-06-11 09:44:03.207497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.463 qpair failed and we were unable to recover it. 00:29:31.463 [2024-06-11 09:44:03.207900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-06-11 09:44:03.207931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.463 qpair failed and we were unable to recover it. 00:29:31.463 [2024-06-11 09:44:03.208351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-06-11 09:44:03.208382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.463 qpair failed and we were unable to recover it. 00:29:31.463 [2024-06-11 09:44:03.208638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-06-11 09:44:03.208668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.463 qpair failed and we were unable to recover it. 00:29:31.463 [2024-06-11 09:44:03.209089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-06-11 09:44:03.209118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.463 qpair failed and we were unable to recover it. 
00:29:31.463 [2024-06-11 09:44:03.209538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-06-11 09:44:03.209569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.463 qpair failed and we were unable to recover it. 00:29:31.463 [2024-06-11 09:44:03.209992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-06-11 09:44:03.210021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.463 qpair failed and we were unable to recover it. 00:29:31.463 [2024-06-11 09:44:03.210438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-06-11 09:44:03.210468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.463 qpair failed and we were unable to recover it. 00:29:31.463 [2024-06-11 09:44:03.210869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-06-11 09:44:03.210900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.463 qpair failed and we were unable to recover it. 00:29:31.463 [2024-06-11 09:44:03.211383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-06-11 09:44:03.211413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.463 qpair failed and we were unable to recover it. 00:29:31.463 [2024-06-11 09:44:03.211853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.211890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.212302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.212340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.212747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.212777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.213184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.213213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.213676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.213707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 
00:29:31.464 [2024-06-11 09:44:03.214116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.214145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.214553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.214583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.214992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.215024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.215368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.215400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.215791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.215821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.216241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.216272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.216655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.216687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.217079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.217110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.217501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.217531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.217968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.217999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 
00:29:31.464 [2024-06-11 09:44:03.218334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.218365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.218766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.218795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.219217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.219245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.219690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.219721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.220171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.220200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.220703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.220733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.221148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.221178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.221604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.221634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.222034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.222063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.222385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.222419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 
00:29:31.464 [2024-06-11 09:44:03.222805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.222836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.223130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.223163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.223596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.223627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.224039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.224069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.224427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.224459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.224839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.224868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.225175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.225205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.225614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.225643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.226138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.226167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.226464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.226497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 
00:29:31.464 [2024-06-11 09:44:03.226894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.226924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.227353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.227384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.227843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-06-11 09:44:03.227873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.464 qpair failed and we were unable to recover it. 00:29:31.464 [2024-06-11 09:44:03.228247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.228276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.228724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.228754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.229160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.229208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.229644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.229675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.230095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.230125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.230606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.230636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.230930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.230958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 
00:29:31.465 [2024-06-11 09:44:03.231328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.231358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.231827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.231856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.232265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.232295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.232749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.232778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.233136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.233166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.233524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.233554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.233898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.233929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.234232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.234260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.234690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.234721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.235065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.235095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 
00:29:31.465 [2024-06-11 09:44:03.235416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.235446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.235868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.235896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.236340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.236371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.236791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.236820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.237138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.237168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.237647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.237677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.238107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.238136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.238552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.238583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.238992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.239020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.239470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.239501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 
00:29:31.465 [2024-06-11 09:44:03.239935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.239964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.240399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.240429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.240863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.240893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.241095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.241124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.241455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.241485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.241913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.241944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.242352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.242383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.242847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.242877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.243296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.243339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.243765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.243795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 
00:29:31.465 [2024-06-11 09:44:03.244292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.465 [2024-06-11 09:44:03.244331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.465 qpair failed and we were unable to recover it. 00:29:31.465 [2024-06-11 09:44:03.244761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.244792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.245210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.245239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.245651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.245682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.246038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.246070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.246549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.246583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.247006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.247036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.247477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.247508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.247916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.247946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.248344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.248376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 
00:29:31.466 [2024-06-11 09:44:03.248772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.248803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.249219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.249249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.249688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.249721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.250118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.250150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.250567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.250671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.250934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.250971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.251296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.251342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.251667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.251698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.252009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.252038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.252464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.252497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 
00:29:31.466 [2024-06-11 09:44:03.252873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.252905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.253332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.253363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.253689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.253722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.466 [2024-06-11 09:44:03.254218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.466 [2024-06-11 09:44:03.254249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.466 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.254698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.254732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.255168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.255198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.255679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.255710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.256115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.256147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.256454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.256484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.256890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.256920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 
00:29:31.737 [2024-06-11 09:44:03.257192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.257221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.257715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.257747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.258206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.258237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.258610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.258641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.259131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.259162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.259443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.259472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.259930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.259960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.260269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.260300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.260694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.260723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.261193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.261223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 
00:29:31.737 [2024-06-11 09:44:03.261540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.261578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.262073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.262103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.262521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.262551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.263031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.263061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.263457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.263487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.263884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.263921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.264207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.264238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.264664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.264694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.265102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.265133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.265535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.265565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 
00:29:31.737 [2024-06-11 09:44:03.265962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.265992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.266376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.266407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.266759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.266791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.267094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.267124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.267434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.737 [2024-06-11 09:44:03.267464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.737 qpair failed and we were unable to recover it. 00:29:31.737 [2024-06-11 09:44:03.267900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.738 [2024-06-11 09:44:03.267928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.738 qpair failed and we were unable to recover it. 00:29:31.738 [2024-06-11 09:44:03.268345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.738 [2024-06-11 09:44:03.268375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.738 qpair failed and we were unable to recover it. 00:29:31.738 [2024-06-11 09:44:03.268812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.738 [2024-06-11 09:44:03.268842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.738 qpair failed and we were unable to recover it. 00:29:31.738 [2024-06-11 09:44:03.269256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.738 [2024-06-11 09:44:03.269286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.738 qpair failed and we were unable to recover it. 00:29:31.738 [2024-06-11 09:44:03.269789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.738 [2024-06-11 09:44:03.269820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.738 qpair failed and we were unable to recover it. 
00:29:31.738 [2024-06-11 09:44:03.270313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.738 [2024-06-11 09:44:03.270354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.738 qpair failed and we were unable to recover it.
00:29:31.738 [... the three-line sequence above repeats for every reconnect attempt from 09:44:03.270 through 09:44:03.366: each connect() to 10.0.0.2, port=4420 fails with errno = 111 (ECONNREFUSED) on tqpair=0x7f42a4000b90, and each qpair fails without recovering ...]
00:29:31.743 [2024-06-11 09:44:03.366506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.743 [2024-06-11 09:44:03.366538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.743 qpair failed and we were unable to recover it.
00:29:31.743 [2024-06-11 09:44:03.366855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.743 [2024-06-11 09:44:03.366887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.743 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.367291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.367344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.367808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.367845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.368253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.368284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.368715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.368746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.369166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.369197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.369558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.369590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.370001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.370031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.370437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.370467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.370903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.370934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 
00:29:31.744 [2024-06-11 09:44:03.371390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.371422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.371856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.371888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.372304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.372344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.372774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.372804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.373172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.373203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.373606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.373637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.374060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.374090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.374496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.374526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.374945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.374975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.375467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.375498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 
00:29:31.744 [2024-06-11 09:44:03.375910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.375941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.376382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.376414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.376838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.376868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.377260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.377290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.377616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.377649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.378035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.378064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.378463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.378494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.378749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.378779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.379152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.379181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.379494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.379527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 
00:29:31.744 [2024-06-11 09:44:03.379954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.379983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.380424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.380454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.380870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.380902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.381306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.381348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.381767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.381798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.382213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.382244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.382568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.382599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.744 [2024-06-11 09:44:03.382986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.744 [2024-06-11 09:44:03.383016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.744 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.383432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.383464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.383888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.383920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 
00:29:31.745 [2024-06-11 09:44:03.384217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.384247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.384503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.384534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.384941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.384977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.385364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.385394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.385804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.385835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.386258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.386290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.386605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.386636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.387045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.387077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.387499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.387530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.387961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.387992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 
00:29:31.745 [2024-06-11 09:44:03.388392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.388425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.388868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.388898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.389306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.389344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.389739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.389769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.390143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.390173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.390492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.390521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.390925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.390955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.391385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.391416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.391795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.391825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.392231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.392261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 
00:29:31.745 [2024-06-11 09:44:03.392625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.392660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.392968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.392998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.393404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.393436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.393851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.393882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.394307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.394348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.394776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.394806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.395212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.745 [2024-06-11 09:44:03.395241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.745 qpair failed and we were unable to recover it. 00:29:31.745 [2024-06-11 09:44:03.395675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.395707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.396171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.396202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.396626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.396659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 
00:29:31.746 [2024-06-11 09:44:03.397050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.397080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.397498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.397530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.397940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.397970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.398282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.398322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.398581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.398614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.398924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.398958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.399372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.399404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.399834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.399864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.400278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.400309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.400704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.400735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 
00:29:31.746 [2024-06-11 09:44:03.401038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.401069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.401424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.401454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.401866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.401903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.402305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.402343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.402732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.402762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.403229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.403260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.403706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.403736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.404164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.404195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.404620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.404651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.405078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.405107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 
00:29:31.746 [2024-06-11 09:44:03.405411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.405441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.746 qpair failed and we were unable to recover it. 00:29:31.746 [2024-06-11 09:44:03.405881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.746 [2024-06-11 09:44:03.405911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-06-11 09:44:03.406214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-06-11 09:44:03.406248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-06-11 09:44:03.406660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-06-11 09:44:03.406689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-06-11 09:44:03.407104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-06-11 09:44:03.407134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-06-11 09:44:03.407598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-06-11 09:44:03.407628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-06-11 09:44:03.408049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-06-11 09:44:03.408078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-06-11 09:44:03.408384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-06-11 09:44:03.408413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-06-11 09:44:03.408836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-06-11 09:44:03.408866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-06-11 09:44:03.409298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-06-11 09:44:03.409335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 
00:29:31.747 [2024-06-11 09:44:03.409697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-06-11 09:44:03.409726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-06-11 09:44:03.410122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-06-11 09:44:03.410152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-06-11 09:44:03.410606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-06-11 09:44:03.410637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-06-11 09:44:03.410925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-06-11 09:44:03.410954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.747 [2024-06-11 09:44:03.411391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.747 [2024-06-11 09:44:03.411420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.747 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.411881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.411911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.412281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.412313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.412745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.412775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.413201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.413231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.413521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.413555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 
00:29:31.748 [2024-06-11 09:44:03.413990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.414019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.414462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.414492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.414902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.414931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.415337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.415369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.415805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.415835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.416254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.416283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.416705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.416735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.417134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.417164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.417515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.417546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.417981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.418011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 
00:29:31.748 [2024-06-11 09:44:03.418429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.418460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.418926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.418955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.419359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.419395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.419794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.419824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.420236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.420266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.420633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.420665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.421078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.421108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.421427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.421457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.421879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.421909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 00:29:31.748 [2024-06-11 09:44:03.422336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-06-11 09:44:03.422368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.748 qpair failed and we were unable to recover it. 
00:29:31.748 [2024-06-11 09:44:03.422797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.748 [2024-06-11 09:44:03.422827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:31.748 qpair failed and we were unable to recover it.
00:29:31.748-00:29:31.754 [... the same three-line failure repeats roughly 200 more times between 2024-06-11 09:44:03.423246 and 09:44:03.512921: every connect() attempt to addr=10.0.0.2, port=4420 fails with errno = 111, and each time tqpair=0x7f42a4000b90 cannot be recovered ...]
00:29:31.754 [2024-06-11 09:44:03.513347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.513378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.513816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.513846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.514258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.514287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.514628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.514660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.515081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.515110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.515435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.515467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.515879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.515908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.516334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.516365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.516789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.516821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.517130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.517165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 
00:29:31.755 [2024-06-11 09:44:03.517573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.517604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.517898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.517934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.518336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.518366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.518690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.518718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.519126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.519156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.519609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.519639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.520053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.520083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.520377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.520407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.520843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.520872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.521296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.521335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 
00:29:31.755 [2024-06-11 09:44:03.521714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.521743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.522166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.522196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.522611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.522643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.523054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.523086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.523421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.523451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.523869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.523899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.524197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.524227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.524646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.524675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.525093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.525124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.525456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.525487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 
00:29:31.755 [2024-06-11 09:44:03.525910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.525941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.526364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.526396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.526844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.526874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.527174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.527205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.527563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.527592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.527895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.527929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.528342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.528372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.755 [2024-06-11 09:44:03.528824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.755 [2024-06-11 09:44:03.528854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.755 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.529271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.529301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.529729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.529759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 
00:29:31.756 [2024-06-11 09:44:03.530057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.530086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.530491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.530522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.530819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.530848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.531272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.531302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.531737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.531767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.532225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.532255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.532690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.532720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.533148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.533177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.533622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.533653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.534082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.534118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 
00:29:31.756 [2024-06-11 09:44:03.534552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.534584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.535007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.535036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.535346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.535377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.535838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.535868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.536175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.536206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.536584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.536614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.537037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.537068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.537508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.537537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.537909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.537939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.538397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.538429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 
00:29:31.756 [2024-06-11 09:44:03.538854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.538884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.539328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.539359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.539857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.539889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.540288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.540325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.540714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.540743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.541162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.541191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.541621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.541651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:31.756 [2024-06-11 09:44:03.542080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.756 [2024-06-11 09:44:03.542110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:31.756 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.542551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.542584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.542901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.542935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 
00:29:32.029 [2024-06-11 09:44:03.543353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.543385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.543816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.543845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.544279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.544311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.544852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.544884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.545305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.545345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.545767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.545796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.546209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.546239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.546561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.546593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.546956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.546986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.547402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.547433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 
00:29:32.029 [2024-06-11 09:44:03.547809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.547838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.548282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.548311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.548762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.548792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.549265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.549297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.549618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.549650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.550079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.550109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.550432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.550467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.550904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.550933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.551352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.551383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.029 [2024-06-11 09:44:03.551703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.551739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 
00:29:32.029 [2024-06-11 09:44:03.552161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.029 [2024-06-11 09:44:03.552190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.029 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.552508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.552539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.552986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.553015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.553435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.553466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.553891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.553920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.554345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.554376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.554792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.554822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.555231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.555260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.555660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.555690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.555987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.556018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 
00:29:32.030 [2024-06-11 09:44:03.556348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.556382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.556882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.556912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.557321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.557352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.557731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.557761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.558172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.558202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.558649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.558681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.559051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.559081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.559512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.559542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.559969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.559999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.560216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.560243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 
00:29:32.030 [2024-06-11 09:44:03.560552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.560582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.561021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.561053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.561505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.561535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.561983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.562013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.562474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.562505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.562904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.562936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.563253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.563283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.563690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.563722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.564132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.564163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.564604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.564634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 
00:29:32.030 [2024-06-11 09:44:03.565050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.565079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.565393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.565421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.565845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.565874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.566292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.566331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.030 [2024-06-11 09:44:03.566748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.030 [2024-06-11 09:44:03.566777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.030 qpair failed and we were unable to recover it. 00:29:32.031 [2024-06-11 09:44:03.567224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.031 [2024-06-11 09:44:03.567254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.031 qpair failed and we were unable to recover it. 00:29:32.031 [2024-06-11 09:44:03.567662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.031 [2024-06-11 09:44:03.567695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.031 qpair failed and we were unable to recover it. 00:29:32.031 [2024-06-11 09:44:03.568118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.031 [2024-06-11 09:44:03.568148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.031 qpair failed and we were unable to recover it. 00:29:32.031 [2024-06-11 09:44:03.568577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.031 [2024-06-11 09:44:03.568607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.031 qpair failed and we were unable to recover it. 00:29:32.031 [2024-06-11 09:44:03.569032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.031 [2024-06-11 09:44:03.569067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.031 qpair failed and we were unable to recover it. 
00:29:32.031 [2024-06-11 09:44:03.569503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.031 [2024-06-11 09:44:03.569533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.031 qpair failed and we were unable to recover it.
[... the same three-line error repeats roughly 200 more times back-to-back, from 2024-06-11 09:44:03.569910 through 09:44:03.659902: every connect() attempt to addr=10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f42a4000b90, and each qpair fails without recovery; the duplicate entries are collapsed here ...]
00:29:32.038 [2024-06-11 09:44:03.660323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.660353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.660755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.660785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.661196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.661225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.661704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.661735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.662024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.662052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.662490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.662520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.662935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.662965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.663392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.663425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.663862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.663893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.664291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.664328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 
00:29:32.038 [2024-06-11 09:44:03.664716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.664745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.665182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.665211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.665675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.665705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.666095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.666125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.666544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.666575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.666962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.666993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.667408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.667440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.667902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.667931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.668357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.668388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.668817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.668847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 
00:29:32.038 [2024-06-11 09:44:03.669271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.669301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.669795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.669825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.670242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.670272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.670679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.670711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.671019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.671053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.671482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.671514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.671920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.671950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.672365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.672395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.672814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.038 [2024-06-11 09:44:03.672844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.038 qpair failed and we were unable to recover it. 00:29:32.038 [2024-06-11 09:44:03.673310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.673373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 
00:29:32.039 [2024-06-11 09:44:03.673842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.673872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.674294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.674332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.674793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.674823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.675252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.675283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.675740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.675772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.676086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.676117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.676651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.676757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.677251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.677291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.677711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.677743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.678110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.678140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 
00:29:32.039 [2024-06-11 09:44:03.678567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.678675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.679161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.679198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.679608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.679640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.680077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.680108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.680471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.680504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.680941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.680970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.681274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.681306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.681791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.681821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.682235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.682266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.682601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.682632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 
00:29:32.039 [2024-06-11 09:44:03.682936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.682966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.683360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.683392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.683686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.683719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.684139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.684169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.684481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.684513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.684923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.684953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.685251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.685282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.685597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.685634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.686035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.686065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 00:29:32.039 [2024-06-11 09:44:03.686459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.686490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.039 qpair failed and we were unable to recover it. 
00:29:32.039 [2024-06-11 09:44:03.686845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.039 [2024-06-11 09:44:03.686876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.687292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.687337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.687751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.687781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.688194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.688225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.688662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.688693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.689111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.689140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.689433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.689468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.689908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.689938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.690349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.690380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.690790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.690826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 
00:29:32.040 [2024-06-11 09:44:03.691229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.691260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.691584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.691616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.691945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.691975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.692407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.692438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.692846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.692875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.693331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.693361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.693625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.693653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.694095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.694123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.694516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.694546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.694973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.695002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 
00:29:32.040 [2024-06-11 09:44:03.695286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.695328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.695741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.695772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.696203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.696238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.040 [2024-06-11 09:44:03.696580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.040 [2024-06-11 09:44:03.696613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.040 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.697030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.697060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.697489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.697519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.697973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.698003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.698417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.698447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.698911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.698940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.699381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.699416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 
00:29:32.041 [2024-06-11 09:44:03.699856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.699886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.700307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.700349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.700777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.700805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.701094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.701126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.701490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.701521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.701925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.701955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.702387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.702440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.702855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.702885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.703303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.703341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.703751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.703780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 
00:29:32.041 [2024-06-11 09:44:03.704204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.704234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.704538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.704571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.704982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.705012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.705329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.705360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.705672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.705703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.706127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.706156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.706572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.706602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.707042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.707072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.707557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.707588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.707985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.708021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 
00:29:32.041 [2024-06-11 09:44:03.708370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.708402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.708851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.708881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.709294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.709330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.709738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.709769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.710174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.710204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.710650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.710680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.711093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.711123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.711443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.711473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.711774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.041 [2024-06-11 09:44:03.711804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.041 qpair failed and we were unable to recover it. 00:29:32.041 [2024-06-11 09:44:03.712242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.042 [2024-06-11 09:44:03.712271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.042 qpair failed and we were unable to recover it. 
00:29:32.042 [2024-06-11 09:44:03.712690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.042 [2024-06-11 09:44:03.712721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.042 qpair failed and we were unable to recover it. 00:29:32.042 [2024-06-11 09:44:03.713152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.042 [2024-06-11 09:44:03.713182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.042 qpair failed and we were unable to recover it. 00:29:32.042 [2024-06-11 09:44:03.713661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.042 [2024-06-11 09:44:03.713691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.042 qpair failed and we were unable to recover it. 00:29:32.042 [2024-06-11 09:44:03.714115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.042 [2024-06-11 09:44:03.714146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.042 qpair failed and we were unable to recover it. 00:29:32.042 [2024-06-11 09:44:03.714439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.042 [2024-06-11 09:44:03.714472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.042 qpair failed and we were unable to recover it. 00:29:32.042 [2024-06-11 09:44:03.714903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.042 [2024-06-11 09:44:03.714933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.042 qpair failed and we were unable to recover it. 00:29:32.042 [2024-06-11 09:44:03.715360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.042 [2024-06-11 09:44:03.715391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.042 qpair failed and we were unable to recover it. 00:29:32.042 [2024-06-11 09:44:03.715814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.042 [2024-06-11 09:44:03.715845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.042 qpair failed and we were unable to recover it. 00:29:32.042 [2024-06-11 09:44:03.716260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.042 [2024-06-11 09:44:03.716291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.042 qpair failed and we were unable to recover it. 00:29:32.042 [2024-06-11 09:44:03.716701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.042 [2024-06-11 09:44:03.716734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.042 qpair failed and we were unable to recover it. 
00:29:32.042 [2024-06-11 09:44:03.717129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.042 [2024-06-11 09:44:03.717161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.042 qpair failed and we were unable to recover it.
00:29:32.042 [... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats verbatim for every retry, roughly 200 times, from 09:44:03.717 through 09:44:03.807 ...]
00:29:32.049 [2024-06-11 09:44:03.807533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.807562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.808019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.808048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.808580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.808686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.809220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.809257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.809627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.809659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.810077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.810107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.810528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.810559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.811030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.811059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.811557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.811661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.812141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.812178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 
00:29:32.049 [2024-06-11 09:44:03.812594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.812627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.813052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.813083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.813492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.813523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.813838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.813869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.814159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.814188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.814550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.814584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.814991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.815021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.815441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.815472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.815890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.815919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.816344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.816375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 
00:29:32.049 [2024-06-11 09:44:03.816809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.816839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.817260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.817289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.817722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.817754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.818173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.818203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.818459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.818498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.818910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.818940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.049 qpair failed and we were unable to recover it. 00:29:32.049 [2024-06-11 09:44:03.819350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.049 [2024-06-11 09:44:03.819380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.819835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.819865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.820276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.820305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.820738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.820769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 
00:29:32.050 [2024-06-11 09:44:03.821139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.821168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.821555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.821586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.821998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.822027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.822448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.822479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.822889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.822919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.823353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.823384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.823797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.823834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.824237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.824267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.824681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.824711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.825071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.825101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 
00:29:32.050 [2024-06-11 09:44:03.825525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.825556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.825970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.826000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.826407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.826438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.826834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.826865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.827298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.827337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.827785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.827816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.828290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.828327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.828803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.828833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.829248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.829277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.829681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.829712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 
00:29:32.050 [2024-06-11 09:44:03.830121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.830151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.830640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.830745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.831227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.831264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.831747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.831780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.832173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.832204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.050 [2024-06-11 09:44:03.832605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.050 [2024-06-11 09:44:03.832636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.050 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.833053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.833086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.833494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.833527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.833936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.833966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.834379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.834412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 
00:29:32.323 [2024-06-11 09:44:03.834823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.834854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.835246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.835276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.835688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.835721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.836134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.836164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.836577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.836607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.836909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.836945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.837366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.837399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.837849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.837880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.838291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.838332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.838749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.838778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 
00:29:32.323 [2024-06-11 09:44:03.839185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.839215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.839609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.839640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.840048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.840078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.840509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.840540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.840993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.841024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.841457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.841487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.841792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.841834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.842263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.842294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.842695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.842725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.843148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.843178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 
00:29:32.323 [2024-06-11 09:44:03.843609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.843641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.844062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.844091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.844465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.844496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.323 qpair failed and we were unable to recover it. 00:29:32.323 [2024-06-11 09:44:03.844906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.323 [2024-06-11 09:44:03.844937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.845352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.845383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.845812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.845842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.846248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.846279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.846703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.846733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.847154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.847185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.847674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.847706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 
00:29:32.324 [2024-06-11 09:44:03.848068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.848098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.848517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.848547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.848954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.848983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.849420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.849451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.849890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.849924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.850351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.850383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.850838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.850868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.851290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.851330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.851783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.851813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.852203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.852232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 
00:29:32.324 [2024-06-11 09:44:03.852637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.852668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.853103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.853133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.853524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.853555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.853979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.854010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.854412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.854443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.854862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.854892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.855301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.855339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.855794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.855825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.856235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.856265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.856643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.856674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 
00:29:32.324 [2024-06-11 09:44:03.857075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.857105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.857548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.857579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.857881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.857912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.858337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.858367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.858781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.858810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.859240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.859269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.859654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.859690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.860109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.860139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.860548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.860579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.860997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.861027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 
00:29:32.324 [2024-06-11 09:44:03.861491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.861522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.324 qpair failed and we were unable to recover it. 00:29:32.324 [2024-06-11 09:44:03.861940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.324 [2024-06-11 09:44:03.861971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.325 qpair failed and we were unable to recover it. 00:29:32.325 [2024-06-11 09:44:03.862379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.325 [2024-06-11 09:44:03.862410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.325 qpair failed and we were unable to recover it. 00:29:32.325 [2024-06-11 09:44:03.862833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.325 [2024-06-11 09:44:03.862865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.325 qpair failed and we were unable to recover it. 00:29:32.325 [2024-06-11 09:44:03.863266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.325 [2024-06-11 09:44:03.863296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.325 qpair failed and we were unable to recover it. 00:29:32.325 [2024-06-11 09:44:03.863684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.325 [2024-06-11 09:44:03.863715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.325 qpair failed and we were unable to recover it. 00:29:32.325 [2024-06-11 09:44:03.864120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.325 [2024-06-11 09:44:03.864150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.325 qpair failed and we were unable to recover it. 00:29:32.325 [2024-06-11 09:44:03.864569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.325 [2024-06-11 09:44:03.864599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.325 qpair failed and we were unable to recover it. 00:29:32.325 [2024-06-11 09:44:03.864970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.325 [2024-06-11 09:44:03.865000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.325 qpair failed and we were unable to recover it. 00:29:32.325 [2024-06-11 09:44:03.865434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.325 [2024-06-11 09:44:03.865465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.325 qpair failed and we were unable to recover it. 
00:29:32.325 [2024-06-11 09:44:03.865888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.325 [2024-06-11 09:44:03.865920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.325 qpair failed and we were unable to recover it.
[... the three-line connect() failure above repeats, with only the timestamps advancing, for a further 208 attempts (210 in total, 09:44:03.865 through 09:44:03.958); the final occurrence is shown below ...]
00:29:32.331 [2024-06-11 09:44:03.958218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.331 [2024-06-11 09:44:03.958248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.331 qpair failed and we were unable to recover it.
00:29:32.331 [2024-06-11 09:44:03.958644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.958675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.959111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.959142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.959561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.959592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.960004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.960034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.960450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.960480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.960879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.960909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.961324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.961355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.961793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.961822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.962237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.962266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.962688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.962719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 
00:29:32.331 [2024-06-11 09:44:03.963123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.963154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.963572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.963605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.964090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.964121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.964530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.964561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.964971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.965001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.965463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.965493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.965910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.965938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.966366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.966398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.966817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.966846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.967266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.967297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 
00:29:32.331 [2024-06-11 09:44:03.967713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.967745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.968172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.968203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.968622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.968654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.969067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.969098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.969515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.969546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.969964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.969994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.970403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.970435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.331 [2024-06-11 09:44:03.970899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.331 [2024-06-11 09:44:03.970929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.331 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.971349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.971380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.971814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.971844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 
00:29:32.332 [2024-06-11 09:44:03.972182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.972212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.972624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.972657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.973067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.973097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.973500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.973531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.973941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.973970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.974409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.974440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.974858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.974888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.975331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.975362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.975774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.975803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.976233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.976263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 
00:29:32.332 [2024-06-11 09:44:03.976662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.976693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.977096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.977127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.977493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.977525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.977922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.977958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.978263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.978293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.978794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.978826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.979269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.979298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.979723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.979754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.980207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.980236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.980630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.980661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 
00:29:32.332 [2024-06-11 09:44:03.981069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.981099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.981488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.981520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.981932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.981962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.332 qpair failed and we were unable to recover it. 00:29:32.332 [2024-06-11 09:44:03.982359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.332 [2024-06-11 09:44:03.982391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.333 qpair failed and we were unable to recover it. 00:29:32.333 [2024-06-11 09:44:03.982805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.333 [2024-06-11 09:44:03.982834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.333 qpair failed and we were unable to recover it. 00:29:32.333 [2024-06-11 09:44:03.983238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.333 [2024-06-11 09:44:03.983268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.333 qpair failed and we were unable to recover it. 00:29:32.333 [2024-06-11 09:44:03.983704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.333 [2024-06-11 09:44:03.983737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.333 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.984160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.984190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.984620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.984651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.985075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.985106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 
00:29:32.334 [2024-06-11 09:44:03.985517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.985549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.985919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.985949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.986363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.986394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.986823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.986853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.987276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.987307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.987719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.987749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.988252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.988281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.988691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.988723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.989213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.989242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.989632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.989663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 
00:29:32.334 [2024-06-11 09:44:03.990100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.990131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.990556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.990587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.990961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.990993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.991407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.991438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.991895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.991925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.992347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.992377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.992780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.992810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.993232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.993261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.993686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.993718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.994014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.994046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 
00:29:32.334 [2024-06-11 09:44:03.994463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.994494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.994908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.994938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.995350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.995382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.995835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.995871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.996286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.996323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.996737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.996767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.997181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.997211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.997512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.997545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.997943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.997972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.998371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.998402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 
00:29:32.334 [2024-06-11 09:44:03.998821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.998852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.999269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.999300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:03.999682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:03.999715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:04.000128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.334 [2024-06-11 09:44:04.000159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.334 qpair failed and we were unable to recover it. 00:29:32.334 [2024-06-11 09:44:04.000550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.000581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.000992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.001022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.001442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.001473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.001795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.001825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.002224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.002255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.002691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.002721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 
00:29:32.335 [2024-06-11 09:44:04.003011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.003040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.003462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.003493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.003911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.003940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.004328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.004360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.004726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.004757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.005177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.005207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.005626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.005656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.006078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.006107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.006512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.006543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.006957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.006987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 
00:29:32.335 [2024-06-11 09:44:04.007398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.007430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.007863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.007893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.008304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.008344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.008781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.008812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.009265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.009294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.009719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.009751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.010192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.010222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.010635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.010666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.011085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.011116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.011543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.011572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 
00:29:32.335 [2024-06-11 09:44:04.011984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.012014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.012434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.012466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.012892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.012922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.013361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.013403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.013813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.013843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.014259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.014290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.014717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.014751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.015165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.015195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.015622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.015653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.016070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.016100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 
00:29:32.335 [2024-06-11 09:44:04.016505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.016535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.016914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.335 [2024-06-11 09:44:04.016943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.335 qpair failed and we were unable to recover it. 00:29:32.335 [2024-06-11 09:44:04.017357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.336 [2024-06-11 09:44:04.017388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.336 qpair failed and we were unable to recover it. 00:29:32.336 [2024-06-11 09:44:04.017821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.336 [2024-06-11 09:44:04.017851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.336 qpair failed and we were unable to recover it. 00:29:32.336 [2024-06-11 09:44:04.018259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.336 [2024-06-11 09:44:04.018289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.336 qpair failed and we were unable to recover it. 00:29:32.336 [2024-06-11 09:44:04.018709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.336 [2024-06-11 09:44:04.018741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.336 qpair failed and we were unable to recover it. 00:29:32.336 [2024-06-11 09:44:04.019152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.336 [2024-06-11 09:44:04.019182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.336 qpair failed and we were unable to recover it. 00:29:32.336 [2024-06-11 09:44:04.019597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.336 [2024-06-11 09:44:04.019630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.336 qpair failed and we were unable to recover it. 00:29:32.336 [2024-06-11 09:44:04.020037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.336 [2024-06-11 09:44:04.020066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.336 qpair failed and we were unable to recover it. 00:29:32.336 [2024-06-11 09:44:04.020495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.336 [2024-06-11 09:44:04.020526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.336 qpair failed and we were unable to recover it. 
00:29:32.341 [2024-06-11 09:44:04.104517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.341 [2024-06-11 09:44:04.104550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.341 qpair failed and we were unable to recover it. 00:29:32.341 [2024-06-11 09:44:04.104979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.341 [2024-06-11 09:44:04.105009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.341 qpair failed and we were unable to recover it. 00:29:32.341 [2024-06-11 09:44:04.105410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.341 [2024-06-11 09:44:04.105441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.341 qpair failed and we were unable to recover it. 00:29:32.341 [2024-06-11 09:44:04.105909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.341 [2024-06-11 09:44:04.105939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.341 qpair failed and we were unable to recover it. 00:29:32.341 [2024-06-11 09:44:04.106307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.341 [2024-06-11 09:44:04.106345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.341 qpair failed and we were unable to recover it. 00:29:32.341 [2024-06-11 09:44:04.106716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.341 [2024-06-11 09:44:04.106746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.341 qpair failed and we were unable to recover it. 00:29:32.341 [2024-06-11 09:44:04.107151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.341 [2024-06-11 09:44:04.107180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.341 qpair failed and we were unable to recover it. 00:29:32.341 [2024-06-11 09:44:04.107603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.341 [2024-06-11 09:44:04.107635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.341 qpair failed and we were unable to recover it. 00:29:32.341 [2024-06-11 09:44:04.107921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.341 [2024-06-11 09:44:04.107952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.341 qpair failed and we were unable to recover it. 00:29:32.341 [2024-06-11 09:44:04.108400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.341 [2024-06-11 09:44:04.108432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.341 qpair failed and we were unable to recover it. 
00:29:32.341 [2024-06-11 09:44:04.108859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.341 [2024-06-11 09:44:04.108889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.341 qpair failed and we were unable to recover it. 00:29:32.341 [2024-06-11 09:44:04.109170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.341 [2024-06-11 09:44:04.109200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.341 qpair failed and we were unable to recover it. 00:29:32.341 [2024-06-11 09:44:04.109622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.341 [2024-06-11 09:44:04.109652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.341 qpair failed and we were unable to recover it. 00:29:32.341 [2024-06-11 09:44:04.110071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.341 [2024-06-11 09:44:04.110101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.341 qpair failed and we were unable to recover it. 00:29:32.341 [2024-06-11 09:44:04.110520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.341 [2024-06-11 09:44:04.110550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.341 qpair failed and we were unable to recover it. 00:29:32.341 [2024-06-11 09:44:04.110965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.341 [2024-06-11 09:44:04.110995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.341 qpair failed and we were unable to recover it. 00:29:32.341 [2024-06-11 09:44:04.111408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.341 [2024-06-11 09:44:04.111440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.341 qpair failed and we were unable to recover it. 00:29:32.341 [2024-06-11 09:44:04.111846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.341 [2024-06-11 09:44:04.111876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.341 qpair failed and we were unable to recover it. 00:29:32.341 [2024-06-11 09:44:04.112288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.112327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.112545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.112573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 
00:29:32.342 [2024-06-11 09:44:04.113020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.113050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.113463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.113495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.113930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.113960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.114390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.114421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.114831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.114861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.115288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.115339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.115767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.115797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.116223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.116254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.116670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.116702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.117082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.117112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 
00:29:32.342 [2024-06-11 09:44:04.117518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.117549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.117978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.118007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.118284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.118325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.118625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.118665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.119074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.119103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.119517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.119547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.119957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.119986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.120399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.120430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.120848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.120879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.121295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.121338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 
00:29:32.342 [2024-06-11 09:44:04.121753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.121784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.122191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.122223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.122637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.122668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.123077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.123107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.123520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.123552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.123971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.124002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.124415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.124446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.124856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.124887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.125299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.125339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.125753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.125783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 
00:29:32.342 [2024-06-11 09:44:04.126158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.126188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.342 [2024-06-11 09:44:04.126647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.342 [2024-06-11 09:44:04.126679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.342 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.127094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.127127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.127548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.127578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.127990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.128020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.128433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.128464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.128898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.128929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.129358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.129388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.129720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.129749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.130184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.130213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 
00:29:32.613 [2024-06-11 09:44:04.130626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.130657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.131078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.131108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.131521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.131551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.131957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.131988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.132360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.132390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.132845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.132876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.133283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.133313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.133741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.133771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.134249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.134279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.134572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.134605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 
00:29:32.613 [2024-06-11 09:44:04.135009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.135039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.135465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.135496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.135800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.135829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.613 [2024-06-11 09:44:04.136226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.613 [2024-06-11 09:44:04.136263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.613 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.136691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.136721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.136998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.137030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.137419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.137449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.137858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.137887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.138259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.138290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.138678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.138710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 
00:29:32.614 [2024-06-11 09:44:04.139120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.139151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.139522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.139552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.139966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.139996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.140412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.140443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.140860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.140890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.141366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.141396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.141793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.141823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.142251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.142280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.142688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.142719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.143128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.143158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 
00:29:32.614 [2024-06-11 09:44:04.143574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.143605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.144026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.144056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.144468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.144500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.144944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.144974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.145407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.145437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.145814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.145844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.146262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.146292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.146714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.146746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.147156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.147187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.147603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.147635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 
00:29:32.614 [2024-06-11 09:44:04.148074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.148104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.148540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.148571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.148947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.148976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.149386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.149417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.149835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.149866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.150276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.150308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.150726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.150756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.151185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.151216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.151556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.151589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.152004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.152034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 
00:29:32.614 [2024-06-11 09:44:04.152443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.152474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.614 [2024-06-11 09:44:04.152894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.614 [2024-06-11 09:44:04.152924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.614 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.153379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.153409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.153696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.153733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.154137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.154166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.154550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.154582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.154992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.155021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.155356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.155387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.155797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.155827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.156261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.156290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 
00:29:32.615 [2024-06-11 09:44:04.156585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.156619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.157052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.157082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.157493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.157524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.157942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.157973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.158346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.158375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.158805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.158834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.159239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.159269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.159736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.159768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.160187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.160216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.160638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.160669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 
00:29:32.615 [2024-06-11 09:44:04.161087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.161117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.161546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.161576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.161992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.162022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.162434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.162465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.162908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.162937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.163353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.163384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.163801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.163831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.164250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.164279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.164574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.164607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 00:29:32.615 [2024-06-11 09:44:04.165025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.165054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it. 
00:29:32.615 [2024-06-11 09:44:04.165425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.615 [2024-06-11 09:44:04.165457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.615 qpair failed and we were unable to recover it.
[... the same triplet — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously, with only the timestamps advancing from 09:44:04.165425 to 09:44:04.257809 ...]
00:29:32.621 [2024-06-11 09:44:04.257809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.257838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it.
00:29:32.621 [2024-06-11 09:44:04.258250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.258280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.621 [2024-06-11 09:44:04.258707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.258738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.621 [2024-06-11 09:44:04.259153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.259184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.621 [2024-06-11 09:44:04.259615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.259645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.621 [2024-06-11 09:44:04.260054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.260084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.621 [2024-06-11 09:44:04.260507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.260538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.621 [2024-06-11 09:44:04.260954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.260984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.621 [2024-06-11 09:44:04.261410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.261439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.621 [2024-06-11 09:44:04.261847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.261877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.621 [2024-06-11 09:44:04.262303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.262344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 
00:29:32.621 [2024-06-11 09:44:04.262690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.262720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.621 [2024-06-11 09:44:04.262986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.263015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.621 [2024-06-11 09:44:04.263259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.263289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.621 [2024-06-11 09:44:04.263721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.263752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.621 [2024-06-11 09:44:04.264163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.264192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.621 [2024-06-11 09:44:04.264605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.264638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.621 [2024-06-11 09:44:04.265049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.265079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.621 [2024-06-11 09:44:04.265507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.265537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.621 [2024-06-11 09:44:04.265957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.265986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.621 [2024-06-11 09:44:04.266403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.266433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 
00:29:32.621 [2024-06-11 09:44:04.266851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.266882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.621 [2024-06-11 09:44:04.267258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.621 [2024-06-11 09:44:04.267288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.621 qpair failed and we were unable to recover it. 00:29:32.622 [2024-06-11 09:44:04.267736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-06-11 09:44:04.267765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.622 qpair failed and we were unable to recover it. 00:29:32.622 [2024-06-11 09:44:04.268193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-06-11 09:44:04.268222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.622 qpair failed and we were unable to recover it. 00:29:32.622 [2024-06-11 09:44:04.268691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-06-11 09:44:04.268722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.622 qpair failed and we were unable to recover it. 00:29:32.622 [2024-06-11 09:44:04.269146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-06-11 09:44:04.269176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.622 qpair failed and we were unable to recover it. 00:29:32.622 [2024-06-11 09:44:04.269528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-06-11 09:44:04.269558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.622 qpair failed and we were unable to recover it. 00:29:32.622 [2024-06-11 09:44:04.269983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-06-11 09:44:04.270013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.622 qpair failed and we were unable to recover it. 00:29:32.622 [2024-06-11 09:44:04.270438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-06-11 09:44:04.270469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.622 qpair failed and we were unable to recover it. 00:29:32.622 [2024-06-11 09:44:04.270921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-06-11 09:44:04.270950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.622 qpair failed and we were unable to recover it. 
00:29:32.622 [2024-06-11 09:44:04.271358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-06-11 09:44:04.271395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.622 qpair failed and we were unable to recover it. 00:29:32.622 [2024-06-11 09:44:04.271816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-06-11 09:44:04.271844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.622 qpair failed and we were unable to recover it. 00:29:32.622 [2024-06-11 09:44:04.272293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-06-11 09:44:04.272334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.622 qpair failed and we were unable to recover it. 00:29:32.622 [2024-06-11 09:44:04.272744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-06-11 09:44:04.272774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.622 qpair failed and we were unable to recover it. 00:29:32.622 [2024-06-11 09:44:04.273181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-06-11 09:44:04.273210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.622 qpair failed and we were unable to recover it. 00:29:32.622 [2024-06-11 09:44:04.273607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-06-11 09:44:04.273639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.622 qpair failed and we were unable to recover it. 00:29:32.622 [2024-06-11 09:44:04.274046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-06-11 09:44:04.274076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.622 qpair failed and we were unable to recover it. 00:29:32.622 [2024-06-11 09:44:04.274461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-06-11 09:44:04.274492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.622 qpair failed and we were unable to recover it. 00:29:32.622 [2024-06-11 09:44:04.274919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-06-11 09:44:04.274949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.622 qpair failed and we were unable to recover it. 00:29:32.622 [2024-06-11 09:44:04.275380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.622 [2024-06-11 09:44:04.275411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.623 qpair failed and we were unable to recover it. 
00:29:32.623 [2024-06-11 09:44:04.275709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.623 [2024-06-11 09:44:04.275744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.623 qpair failed and we were unable to recover it. 00:29:32.623 [2024-06-11 09:44:04.276172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.623 [2024-06-11 09:44:04.276202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.623 qpair failed and we were unable to recover it. 00:29:32.623 [2024-06-11 09:44:04.276670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.623 [2024-06-11 09:44:04.276701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.623 qpair failed and we were unable to recover it. 00:29:32.623 [2024-06-11 09:44:04.277121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.623 [2024-06-11 09:44:04.277151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.623 qpair failed and we were unable to recover it. 00:29:32.623 [2024-06-11 09:44:04.277574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.623 [2024-06-11 09:44:04.277606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.623 qpair failed and we were unable to recover it. 00:29:32.623 [2024-06-11 09:44:04.278015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.623 [2024-06-11 09:44:04.278046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.623 qpair failed and we were unable to recover it. 00:29:32.623 [2024-06-11 09:44:04.278438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.623 [2024-06-11 09:44:04.278468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.623 qpair failed and we were unable to recover it. 00:29:32.623 [2024-06-11 09:44:04.278829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.623 [2024-06-11 09:44:04.278860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.623 qpair failed and we were unable to recover it. 00:29:32.623 [2024-06-11 09:44:04.279281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.623 [2024-06-11 09:44:04.279312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.623 qpair failed and we were unable to recover it. 00:29:32.623 [2024-06-11 09:44:04.279744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.623 [2024-06-11 09:44:04.279775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.623 qpair failed and we were unable to recover it. 
00:29:32.623 [2024-06-11 09:44:04.280152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.623 [2024-06-11 09:44:04.280182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.623 qpair failed and we were unable to recover it. 00:29:32.623 [2024-06-11 09:44:04.280563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.623 [2024-06-11 09:44:04.280594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.623 qpair failed and we were unable to recover it. 00:29:32.623 [2024-06-11 09:44:04.281007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.623 [2024-06-11 09:44:04.281036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.623 qpair failed and we were unable to recover it. 00:29:32.623 [2024-06-11 09:44:04.281455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.623 [2024-06-11 09:44:04.281486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.623 qpair failed and we were unable to recover it. 00:29:32.623 [2024-06-11 09:44:04.281900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.281931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.282344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.282374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.282761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.282791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.283209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.283238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.283612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.283644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.284090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.284120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 
00:29:32.624 [2024-06-11 09:44:04.284534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.284564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.284989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.285018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.285435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.285467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.285844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.285873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.286289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.286328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.286740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.286770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.287182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.287212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.287648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.287679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.288111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.288140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.288535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.288566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 
00:29:32.624 [2024-06-11 09:44:04.288973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.289008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.289375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.289407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.289827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.289859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.290279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.290309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.290741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.290773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.291178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.291208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.291628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.291660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.292028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.292059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.292452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.292482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.292894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.292924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 
00:29:32.624 [2024-06-11 09:44:04.293346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.293378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.293653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.293685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.294087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.294117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.294532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.294563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.294975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.295005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.295425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.295456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.295884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.295914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.296294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.296358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.296731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.296761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.297167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.297198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 
00:29:32.624 [2024-06-11 09:44:04.297626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.297657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.298085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.624 [2024-06-11 09:44:04.298115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.624 qpair failed and we were unable to recover it. 00:29:32.624 [2024-06-11 09:44:04.298523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.298553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.298975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.299005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.299423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.299453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.299885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.299915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.300342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.300384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.300840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.300873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.301287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.301337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.301781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.301811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 
00:29:32.625 [2024-06-11 09:44:04.302235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.302266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.302698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.302732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.303102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.303130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.303636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.303745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.304239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.304276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.304854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.304888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.305293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.305343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.305751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.305781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.306197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.306229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.306638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.306669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 
00:29:32.625 [2024-06-11 09:44:04.307070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.307113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.307382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.307419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.307826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.307857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.308274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.308304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.308730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.308759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.309191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.309221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.309636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.309669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.310091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.310121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.310526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.310558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.310970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.310999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 
00:29:32.625 [2024-06-11 09:44:04.311411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.311441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.311829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.311859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.312133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.312166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.312584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.312616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.313031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.313061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.313491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.313522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.313937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.313967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.314405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.314437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.314733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.314765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 00:29:32.625 [2024-06-11 09:44:04.315132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.315161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it. 
00:29:32.625 [2024-06-11 09:44:04.315504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.625 [2024-06-11 09:44:04.315537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.625 qpair failed and we were unable to recover it.
[... the identical three-part error (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously from 09:44:04.315 through 09:44:04.407 (elapsed 00:29:32.625-00:29:32.631); every connection attempt to 10.0.0.2:4420 is refused ...]
00:29:32.631 [2024-06-11 09:44:04.407514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.631 [2024-06-11 09:44:04.407545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.631 qpair failed and we were unable to recover it.
00:29:32.631 [2024-06-11 09:44:04.407953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.631 [2024-06-11 09:44:04.407984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.631 qpair failed and we were unable to recover it. 00:29:32.631 [2024-06-11 09:44:04.408408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.631 [2024-06-11 09:44:04.408438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.631 qpair failed and we were unable to recover it. 00:29:32.631 [2024-06-11 09:44:04.408810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.631 [2024-06-11 09:44:04.408841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.631 qpair failed and we were unable to recover it. 00:29:32.631 [2024-06-11 09:44:04.409238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.631 [2024-06-11 09:44:04.409268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.631 qpair failed and we were unable to recover it. 00:29:32.631 [2024-06-11 09:44:04.409701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.631 [2024-06-11 09:44:04.409731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.631 qpair failed and we were unable to recover it. 00:29:32.631 [2024-06-11 09:44:04.410024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.631 [2024-06-11 09:44:04.410052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.631 qpair failed and we were unable to recover it. 00:29:32.631 [2024-06-11 09:44:04.410466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.631 [2024-06-11 09:44:04.410496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.631 qpair failed and we were unable to recover it. 00:29:32.631 [2024-06-11 09:44:04.410885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.631 [2024-06-11 09:44:04.410914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.631 qpair failed and we were unable to recover it. 00:29:32.631 [2024-06-11 09:44:04.411190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.631 [2024-06-11 09:44:04.411220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.631 qpair failed and we were unable to recover it. 00:29:32.631 [2024-06-11 09:44:04.411617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.631 [2024-06-11 09:44:04.411648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.631 qpair failed and we were unable to recover it. 
00:29:32.631 [2024-06-11 09:44:04.412038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.631 [2024-06-11 09:44:04.412067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.631 qpair failed and we were unable to recover it. 00:29:32.631 [2024-06-11 09:44:04.412489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.632 [2024-06-11 09:44:04.412521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.632 qpair failed and we were unable to recover it. 00:29:32.632 [2024-06-11 09:44:04.412822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.632 [2024-06-11 09:44:04.412854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.632 qpair failed and we were unable to recover it. 00:29:32.632 [2024-06-11 09:44:04.413238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.632 [2024-06-11 09:44:04.413267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.632 qpair failed and we were unable to recover it. 00:29:32.632 [2024-06-11 09:44:04.413717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.632 [2024-06-11 09:44:04.413749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.632 qpair failed and we were unable to recover it. 00:29:32.632 [2024-06-11 09:44:04.414161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.632 [2024-06-11 09:44:04.414190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.632 qpair failed and we were unable to recover it. 00:29:32.632 [2024-06-11 09:44:04.414604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.632 [2024-06-11 09:44:04.414634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.632 qpair failed and we were unable to recover it. 00:29:32.632 [2024-06-11 09:44:04.415050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.632 [2024-06-11 09:44:04.415081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.632 qpair failed and we were unable to recover it. 00:29:32.632 [2024-06-11 09:44:04.415493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.632 [2024-06-11 09:44:04.415524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.632 qpair failed and we were unable to recover it. 00:29:32.632 [2024-06-11 09:44:04.415944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.632 [2024-06-11 09:44:04.415973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.632 qpair failed and we were unable to recover it. 
00:29:32.632 [2024-06-11 09:44:04.416385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.632 [2024-06-11 09:44:04.416416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.632 qpair failed and we were unable to recover it. 00:29:32.632 [2024-06-11 09:44:04.416846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.632 [2024-06-11 09:44:04.416875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.632 qpair failed and we were unable to recover it. 00:29:32.632 [2024-06-11 09:44:04.417284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.632 [2024-06-11 09:44:04.417324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.632 qpair failed and we were unable to recover it. 00:29:32.632 [2024-06-11 09:44:04.417736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.632 [2024-06-11 09:44:04.417766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.632 qpair failed and we were unable to recover it. 00:29:32.632 [2024-06-11 09:44:04.418175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.632 [2024-06-11 09:44:04.418206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.632 qpair failed and we were unable to recover it. 00:29:32.902 [2024-06-11 09:44:04.418603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.902 [2024-06-11 09:44:04.418636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.902 qpair failed and we were unable to recover it. 00:29:32.902 [2024-06-11 09:44:04.419051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.902 [2024-06-11 09:44:04.419083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.902 qpair failed and we were unable to recover it. 00:29:32.902 [2024-06-11 09:44:04.419475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.902 [2024-06-11 09:44:04.419505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.902 qpair failed and we were unable to recover it. 00:29:32.902 [2024-06-11 09:44:04.419915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.902 [2024-06-11 09:44:04.419947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.902 qpair failed and we were unable to recover it. 00:29:32.902 [2024-06-11 09:44:04.420361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.902 [2024-06-11 09:44:04.420391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.902 qpair failed and we were unable to recover it. 
00:29:32.902 [2024-06-11 09:44:04.420802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.902 [2024-06-11 09:44:04.420834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.902 qpair failed and we were unable to recover it. 00:29:32.902 [2024-06-11 09:44:04.421243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.902 [2024-06-11 09:44:04.421275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.902 qpair failed and we were unable to recover it. 00:29:32.902 [2024-06-11 09:44:04.421684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.902 [2024-06-11 09:44:04.421714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.902 qpair failed and we were unable to recover it. 00:29:32.902 [2024-06-11 09:44:04.422125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.902 [2024-06-11 09:44:04.422156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.902 qpair failed and we were unable to recover it. 00:29:32.902 [2024-06-11 09:44:04.422564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.902 [2024-06-11 09:44:04.422595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.902 qpair failed and we were unable to recover it. 00:29:32.902 [2024-06-11 09:44:04.422968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.902 [2024-06-11 09:44:04.422998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.902 qpair failed and we were unable to recover it. 00:29:32.902 [2024-06-11 09:44:04.423403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.902 [2024-06-11 09:44:04.423433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.902 qpair failed and we were unable to recover it. 00:29:32.902 [2024-06-11 09:44:04.423856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.902 [2024-06-11 09:44:04.423892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.902 qpair failed and we were unable to recover it. 00:29:32.902 [2024-06-11 09:44:04.424298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.902 [2024-06-11 09:44:04.424339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.902 qpair failed and we were unable to recover it. 00:29:32.902 [2024-06-11 09:44:04.424756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.902 [2024-06-11 09:44:04.424788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.902 qpair failed and we were unable to recover it. 
00:29:32.902 [2024-06-11 09:44:04.425193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.902 [2024-06-11 09:44:04.425223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.902 qpair failed and we were unable to recover it. 00:29:32.902 [2024-06-11 09:44:04.425610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.902 [2024-06-11 09:44:04.425643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.902 qpair failed and we were unable to recover it. 00:29:32.902 [2024-06-11 09:44:04.426045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.902 [2024-06-11 09:44:04.426075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.902 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.426437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.426469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.426889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.426919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.427342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.427373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.427654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.427685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.428103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.428133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.428543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.428574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.428999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.429028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 
00:29:32.903 [2024-06-11 09:44:04.429443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.429474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.429911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.429941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.430349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.430401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.430820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.430849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.431255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.431285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.431652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.431681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.432114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.432144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.432580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.432613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.433017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.433046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.433431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.433462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 
00:29:32.903 [2024-06-11 09:44:04.433885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.433914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.434338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.434369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.434785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.434815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.435236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.435266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.435662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.435694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.435974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.436003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.436361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.436394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.436856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.436886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.437174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.437207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.437623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.437655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 
00:29:32.903 [2024-06-11 09:44:04.438065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.438094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.438518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.438549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.438959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.438988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.439406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.439437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.439852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.439882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.440303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.440347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.440794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.440824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.441249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.441287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.441739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.441769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 00:29:32.903 [2024-06-11 09:44:04.442183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.903 [2024-06-11 09:44:04.442213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.903 qpair failed and we were unable to recover it. 
00:29:32.903 [2024-06-11 09:44:04.442633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.442664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.443099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.443130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.443542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.443573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.443989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.444020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.444334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.444365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.444800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.444830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.445235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.445265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.445692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.445724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.446134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.446163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.446594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.446625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 
00:29:32.904 [2024-06-11 09:44:04.447067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.447098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.447553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.447584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.447989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.448019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.448444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.448475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.448889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.448920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.449335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.449365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.449775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.449806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.450224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.450255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.450667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.450706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.451113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.451145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 
00:29:32.904 [2024-06-11 09:44:04.451643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.451748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.452280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.452337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.452764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.452796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.453219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.453248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.453708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.453740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.454161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.454191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.454612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.454644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.455068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.455098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.455557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.455588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.455994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.456025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 
00:29:32.904 [2024-06-11 09:44:04.456279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.456313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.456727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.456760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.457177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.457208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.457638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.457669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.458072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.458103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.458526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.458557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.458849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.458880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.459338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.904 [2024-06-11 09:44:04.459378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.904 qpair failed and we were unable to recover it. 00:29:32.904 [2024-06-11 09:44:04.459689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.459717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.460125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.460156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 
00:29:32.905 [2024-06-11 09:44:04.460578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.460609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.461032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.461063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.461471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.461504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.461903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.461933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.462415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.462448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.462894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.462926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.463339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.463371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.463849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.463880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.464161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.464192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.464604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.464635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 
00:29:32.905 [2024-06-11 09:44:04.464897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.464926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.465347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.465378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.465795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.465825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.466130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.466158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.466594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.466625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.467047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.467078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.467389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.467418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.467845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.467875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.468244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.468274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.468694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.468726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 
00:29:32.905 [2024-06-11 09:44:04.469137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.469167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.469599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.469631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.470042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.470073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.470495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.470527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.470981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.471011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.471445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.471476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.471699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.471727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.472144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.472174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.472597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.472628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.472925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.472954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 
00:29:32.905 [2024-06-11 09:44:04.473254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.473283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.473730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.473762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.474172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.474203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.474561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.474594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.475000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.475030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.475455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.905 [2024-06-11 09:44:04.475486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.905 qpair failed and we were unable to recover it. 00:29:32.905 [2024-06-11 09:44:04.475858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.475888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.476192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.476232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.476647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.476678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.477100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.477131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 
00:29:32.906 [2024-06-11 09:44:04.477416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.477448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.477879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.477908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.478337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.478370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.478790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.478820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.479230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.479261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.479626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.479659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.480063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.480094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.480525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.480557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.480962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.480992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.481415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.481447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 
00:29:32.906 [2024-06-11 09:44:04.481856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.481886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.482330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.482363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.482713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.482744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.483072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.483100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.483514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.483544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.483823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.483853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.484268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.484297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.484725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.484756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.485183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.485212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.485578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.485610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 
00:29:32.906 [2024-06-11 09:44:04.486023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.486053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.486477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.486506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.486927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.486958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.487378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.487408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.487829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.487859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.488258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.488287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.488718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.488749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.489172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.489203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.489663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.489696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.490118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.490146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 
00:29:32.906 [2024-06-11 09:44:04.490557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.490588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.491010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.491040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.491458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.491488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.906 [2024-06-11 09:44:04.491936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.906 [2024-06-11 09:44:04.491965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.906 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.492221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.492250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.492681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.492712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.493127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.493157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.493454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.493496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.493939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.493968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.494390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.494421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 
00:29:32.907 [2024-06-11 09:44:04.494744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.494776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.495115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.495143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.495551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.495581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.495998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.496027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.496332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.496364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.496809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.496840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.497251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.497280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.497635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.497669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.498102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.498132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.498505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.498536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 
00:29:32.907 [2024-06-11 09:44:04.498911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.498940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.499362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.499393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.499682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.499713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.500013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.500045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.500456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.500486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.500932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.500965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.501349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.501382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.501802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.501832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.502246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.502277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.502692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.502723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 
00:29:32.907 [2024-06-11 09:44:04.503145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.503176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.503611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.503641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.504048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.504077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.504362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.504396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.504862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.907 [2024-06-11 09:44:04.504892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.907 qpair failed and we were unable to recover it. 00:29:32.907 [2024-06-11 09:44:04.505157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.505185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.505651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.505682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.506103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.506132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.506549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.506580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.507018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.507047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 
00:29:32.908 [2024-06-11 09:44:04.507440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.507472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.507905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.507934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.508350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.508380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.508810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.508839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.509255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.509284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.509723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.509756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.510176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.510206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.510608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.510644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.511007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.511037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.511474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.511505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 
00:29:32.908 [2024-06-11 09:44:04.511910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.511940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.512359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.512389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.512799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.512830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.513198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.513227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.513626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.513657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.514109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.514139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.514548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.514578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.514990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.515019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.515440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.515470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.515913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.515942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 
00:29:32.908 [2024-06-11 09:44:04.516236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.516267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.516717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.516747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.517024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.517055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.517470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.517500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.517914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.517944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.518377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.518407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.518713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.518743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.519155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.519184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.519606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.519637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.520023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.520053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 
00:29:32.908 [2024-06-11 09:44:04.520461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.520490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.520923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.520952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.521366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.908 [2024-06-11 09:44:04.521397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.908 qpair failed and we were unable to recover it. 00:29:32.908 [2024-06-11 09:44:04.521824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.521855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.522267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.522297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.522769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.522801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.523170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.523199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.523598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.523629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.524037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.524066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.524494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.524524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 
00:29:32.909 [2024-06-11 09:44:04.524930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.524960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.525387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.525418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.525833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.525862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.526306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.526351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.526780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.526809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.527227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.527256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.527670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.527700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.528113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.528148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.528520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.528550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.528960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.528989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 
00:29:32.909 [2024-06-11 09:44:04.529481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.529512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.529933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.529962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.530375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.530407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.530826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.530855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.531266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.531295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.531717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.531748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.532159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.532188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.532594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.532624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.533078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.533108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.533569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.533600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 
00:29:32.909 [2024-06-11 09:44:04.534010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.534039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.534339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.534372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.534814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.534845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.535264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.535292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.535669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.535698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.536117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.536146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.536694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.536799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.537376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.537418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.537762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.537794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.909 [2024-06-11 09:44:04.538211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.538240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 
00:29:32.909 [2024-06-11 09:44:04.538641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.909 [2024-06-11 09:44:04.538672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.909 qpair failed and we were unable to recover it. 00:29:32.910 [2024-06-11 09:44:04.539094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.910 [2024-06-11 09:44:04.539123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.910 qpair failed and we were unable to recover it. 00:29:32.910 [2024-06-11 09:44:04.539733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.910 [2024-06-11 09:44:04.539837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.910 qpair failed and we were unable to recover it. 00:29:32.910 [2024-06-11 09:44:04.540339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.910 [2024-06-11 09:44:04.540379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.910 qpair failed and we were unable to recover it. 00:29:32.910 [2024-06-11 09:44:04.540816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.910 [2024-06-11 09:44:04.540848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.910 qpair failed and we were unable to recover it. 00:29:32.910 [2024-06-11 09:44:04.541301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.910 [2024-06-11 09:44:04.541347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.910 qpair failed and we were unable to recover it. 00:29:32.910 [2024-06-11 09:44:04.541787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.910 [2024-06-11 09:44:04.541817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.910 qpair failed and we were unable to recover it. 00:29:32.910 [2024-06-11 09:44:04.542238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.910 [2024-06-11 09:44:04.542269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.910 qpair failed and we were unable to recover it. 00:29:32.910 [2024-06-11 09:44:04.542803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.910 [2024-06-11 09:44:04.542909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.910 qpair failed and we were unable to recover it. 00:29:32.910 [2024-06-11 09:44:04.543531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.910 [2024-06-11 09:44:04.543635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.910 qpair failed and we were unable to recover it. 
00:29:32.910 [2024-06-11 09:44:04.544157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.910 [2024-06-11 09:44:04.544195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.910 qpair failed and we were unable to recover it. 00:29:32.910 [2024-06-11 09:44:04.544600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.910 [2024-06-11 09:44:04.544634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.910 qpair failed and we were unable to recover it. 00:29:32.910 [2024-06-11 09:44:04.545041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.910 [2024-06-11 09:44:04.545071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.910 qpair failed and we were unable to recover it. 00:29:32.910 [2024-06-11 09:44:04.545448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.910 [2024-06-11 09:44:04.545480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.910 qpair failed and we were unable to recover it. 00:29:32.910 [2024-06-11 09:44:04.545916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.910 [2024-06-11 09:44:04.545947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.910 qpair failed and we were unable to recover it. 00:29:32.910 [2024-06-11 09:44:04.546368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.910 [2024-06-11 09:44:04.546399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.910 qpair failed and we were unable to recover it. 00:29:32.910 [2024-06-11 09:44:04.546820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.910 [2024-06-11 09:44:04.546850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.910 qpair failed and we were unable to recover it. 00:29:32.910 [2024-06-11 09:44:04.547158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.910 [2024-06-11 09:44:04.547208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.910 qpair failed and we were unable to recover it. 00:29:32.910 [2024-06-11 09:44:04.547626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.910 [2024-06-11 09:44:04.547658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.910 qpair failed and we were unable to recover it. 00:29:32.910 [2024-06-11 09:44:04.547943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.910 [2024-06-11 09:44:04.547973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.910 qpair failed and we were unable to recover it. 
00:29:32.910 [2024-06-11 09:44:04.548430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.910 [2024-06-11 09:44:04.548459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.910 qpair failed and we were unable to recover it.
00:29:32.910 [2024-06-11 09:44:04.548890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.910 [2024-06-11 09:44:04.548920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.910 qpair failed and we were unable to recover it.
00:29:32.910 [2024-06-11 09:44:04.549282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.910 [2024-06-11 09:44:04.549314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.910 qpair failed and we were unable to recover it.
00:29:32.910 [2024-06-11 09:44:04.549745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.910 [2024-06-11 09:44:04.549775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.910 qpair failed and we were unable to recover it.
00:29:32.910 [2024-06-11 09:44:04.550207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.910 [2024-06-11 09:44:04.550237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.910 qpair failed and we were unable to recover it.
00:29:32.910 [2024-06-11 09:44:04.550641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.910 [2024-06-11 09:44:04.550672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.910 qpair failed and we were unable to recover it.
00:29:32.910 [2024-06-11 09:44:04.551080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.910 [2024-06-11 09:44:04.551113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.910 qpair failed and we were unable to recover it.
00:29:32.910 [2024-06-11 09:44:04.551542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.910 [2024-06-11 09:44:04.551574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.910 qpair failed and we were unable to recover it.
00:29:32.910 [2024-06-11 09:44:04.551992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.910 [2024-06-11 09:44:04.552021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.910 qpair failed and we were unable to recover it.
00:29:32.910 [2024-06-11 09:44:04.552442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.910 [2024-06-11 09:44:04.552472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.910 qpair failed and we were unable to recover it.
00:29:32.910 [2024-06-11 09:44:04.552894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.910 [2024-06-11 09:44:04.552923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.910 qpair failed and we were unable to recover it.
00:29:32.910 [2024-06-11 09:44:04.553348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.910 [2024-06-11 09:44:04.553380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.910 qpair failed and we were unable to recover it.
00:29:32.910 [2024-06-11 09:44:04.553795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.910 [2024-06-11 09:44:04.553825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.910 qpair failed and we were unable to recover it.
00:29:32.910 [2024-06-11 09:44:04.554230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.910 [2024-06-11 09:44:04.554260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.910 qpair failed and we were unable to recover it.
00:29:32.910 [2024-06-11 09:44:04.554745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.910 [2024-06-11 09:44:04.554777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.910 qpair failed and we were unable to recover it.
00:29:32.910 [2024-06-11 09:44:04.555160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.910 [2024-06-11 09:44:04.555190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.910 qpair failed and we were unable to recover it.
00:29:32.910 [2024-06-11 09:44:04.555603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.910 [2024-06-11 09:44:04.555634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.910 qpair failed and we were unable to recover it.
00:29:32.910 [2024-06-11 09:44:04.556049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.910 [2024-06-11 09:44:04.556079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.910 qpair failed and we were unable to recover it.
00:29:32.910 [2024-06-11 09:44:04.556494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.910 [2024-06-11 09:44:04.556525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.910 qpair failed and we were unable to recover it.
00:29:32.910 [2024-06-11 09:44:04.556954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.556983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.557295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.557334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.557639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.557670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.558066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.558096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.558555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.558585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.558995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.559027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.559441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.559473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.559886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.559917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.560340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.560370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.560786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.560815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.561224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.561253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.561622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.561655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.562065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.562095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.562425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.562455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.562858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.562888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.563201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.563229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.563628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.563660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.564078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.564107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.564527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.564564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.564970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.564999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.565417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.565449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.565862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.565892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.566310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.566350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.566779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.566810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.567240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.567269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.567660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.567691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.568108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.568138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.568588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.568619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.569041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.569071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.569535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.569567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.570018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.570048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.570549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.570654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.911 [2024-06-11 09:44:04.571200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.911 [2024-06-11 09:44:04.571239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.911 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.571651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.571683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.572101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.572130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.572548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.572580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.573000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.573030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.573451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.573482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.573908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.573939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.574429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.574460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.574871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.574902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.575313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.575356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.575786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.575817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.576219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.576248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.576669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.576700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.577106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.577136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.577560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.577591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.577996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.578025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.578449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.578481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.578943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.578973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.579466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.579496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.579926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.579955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.580375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.580407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.580709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.580743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.581157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.581188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.581617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.581648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.582110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.582140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.582552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.582583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.582881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.582920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.583336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.583368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.583796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.583826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.584118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.584151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.584599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.584630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.585036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.585066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.585482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.585512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.585928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.585958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.586388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.586419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.586720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.586748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.587170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.587199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.587635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.587665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.912 [2024-06-11 09:44:04.588059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.912 [2024-06-11 09:44:04.588089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.912 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.588504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.588536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.588997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.589028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.589336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.589369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.589777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.589807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.590287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.590327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.590739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.590769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.591063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.591094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.591531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.591561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.591970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.591999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.592428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.592459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.592906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.592936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.593371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.593401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.593813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.593844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.594274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.594305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.594739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.594770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.595183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.595212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.595627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.595658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.596071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.596101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.596526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.596558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.596921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.596951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.597355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.597387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.597816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.597845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.598253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.598283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.598718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.598751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.599042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.599071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.599371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.599403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.599825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.599855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.600268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.600305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.600819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.600850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.601282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.601338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.601663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.601696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.602123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.602153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.602557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.602589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.603003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.603033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.603594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.603698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.604220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.604259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.604704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.604737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.605157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.605187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.605604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.605634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.606056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.913 [2024-06-11 09:44:04.606088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.913 qpair failed and we were unable to recover it.
00:29:32.913 [2024-06-11 09:44:04.606503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.606535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.606974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.607004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.607410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.607442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.607862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.607892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.608311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.608353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.608794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.608823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.609238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.609268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.609703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.609735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.610157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.610188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.610619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.610650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.611040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.611070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.611451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.611482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.611885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.611915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.612338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.612369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.612777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.612807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.613228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.613257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.613669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.613700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.614125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.614154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.614605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.614636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.615056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.615086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.615497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.615526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.615942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.615972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.616378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.616410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.616813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.616843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.617207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.617237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.617624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.617654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.618071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.618101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.618512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.618550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.618958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.618987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.619414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.619444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.619862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.619892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.620332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.620363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.620816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.620847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.621156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.621187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.621606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.621638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.622072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.622101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.622511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.622542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.622957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.622986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.623398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.623429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.623855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.914 [2024-06-11 09:44:04.623884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.914 qpair failed and we were unable to recover it.
00:29:32.914 [2024-06-11 09:44:04.624295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.914 [2024-06-11 09:44:04.624336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.914 qpair failed and we were unable to recover it. 00:29:32.914 [2024-06-11 09:44:04.624672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.914 [2024-06-11 09:44:04.624702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.914 qpair failed and we were unable to recover it. 00:29:32.914 [2024-06-11 09:44:04.625100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.625131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.625546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.625578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.625982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.626013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.626436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.626468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.626764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.626792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.627220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.627249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.627574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.627606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.628000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.628029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 
00:29:32.915 [2024-06-11 09:44:04.628443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.628474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.628929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.628958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.629380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.629410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.629837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.629866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.630278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.630309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.630733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.630764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.631173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.631203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.631605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.631637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.632010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.632040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.632461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.632492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 
00:29:32.915 [2024-06-11 09:44:04.632901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.632930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.633304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.633354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.633804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.633834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.634197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.634227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.634660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.634690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.635107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.635137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.635514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.635546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.635868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.635900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.636310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.636351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.636808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.636837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 
00:29:32.915 [2024-06-11 09:44:04.637256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.637286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.637714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.637746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.638155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.638185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.638607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.638637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.639046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.639078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.639463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.639494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.639789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.639820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.640243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.640273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.640512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.640542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.640910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.640940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 
00:29:32.915 [2024-06-11 09:44:04.641433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.915 [2024-06-11 09:44:04.641464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.915 qpair failed and we were unable to recover it. 00:29:32.915 [2024-06-11 09:44:04.641930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.641959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.642374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.642405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.642840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.642870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.643229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.643260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.643555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.643593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.644022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.644052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.644475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.644506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.644915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.644944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.645372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.645402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 
00:29:32.916 [2024-06-11 09:44:04.645785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.645816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.646234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.646264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.646676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.646706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.647016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.647044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.647470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.647505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.647922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.647951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.648351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.648382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.648818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.648848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.649346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.649378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.649793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.649823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 
00:29:32.916 [2024-06-11 09:44:04.650229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.650259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.650692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.650722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.651134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.651167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.651585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.651615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.652028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.652056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.652492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.652523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.652932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.652961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.653387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.653417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.653816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.653847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.654272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.654301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 
00:29:32.916 [2024-06-11 09:44:04.654716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.654746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.655161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.655192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.655615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.655646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.656060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.656089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.656509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.656540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.656963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.656992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.657416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.657446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.657858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.657888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.658303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.658343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.658755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.658785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 
00:29:32.916 [2024-06-11 09:44:04.659204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.659234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.916 [2024-06-11 09:44:04.659658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.916 [2024-06-11 09:44:04.659689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.916 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.660102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.660132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.660548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.660579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.660872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.660905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.661350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.661381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.661799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.661827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.662244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.662273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.662683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.662715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.663125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.663154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 
00:29:32.917 [2024-06-11 09:44:04.663583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.663613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.664041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.664071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.664450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.664481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.664883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.664912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.665337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.665374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.665656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.665688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.666147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.666176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.666578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.666608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.666898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.666929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.667344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.667375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 
00:29:32.917 [2024-06-11 09:44:04.667785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.667814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.668241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.668270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.668672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.668703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.669136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.669165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.669475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.669504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.669934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.669963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.670373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.670403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.670836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.670865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.671164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.671192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.671584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.671614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 
00:29:32.917 [2024-06-11 09:44:04.672017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.672048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.672468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.672498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.672906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.672935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.673334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.673365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.673772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.673802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.674225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.674255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.674668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.674700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.675116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.675146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.675560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.675590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.676008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.676038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 
00:29:32.917 [2024-06-11 09:44:04.676330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.676362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.676657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.676687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.677093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.677122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.677546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.677576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.677941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.677970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.678273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.678300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.917 qpair failed and we were unable to recover it. 00:29:32.917 [2024-06-11 09:44:04.678718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.917 [2024-06-11 09:44:04.678748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.679181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.679211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.679616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.679647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.680069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.680099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 
00:29:32.918 [2024-06-11 09:44:04.680511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.680541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.680974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.681003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.681298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.681336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.681675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.681705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.682119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.682154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.682576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.682606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.683038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.683067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.683383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.683417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.683826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.683857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.684283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.684323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 
00:29:32.918 [2024-06-11 09:44:04.684781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.684812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.685239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.685269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.685678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.685708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.686124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.686155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.686560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.686590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.687013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.687043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.687353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.687388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.687794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.687823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.688239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.688269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.688701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.688732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 
00:29:32.918 [2024-06-11 09:44:04.689147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.689177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.689602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.689632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.690040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.690069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.690444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.690475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.690909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.690939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.691364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.691394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.691813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.691841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.692145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.692176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.692627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.692658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.693079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.693107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 
00:29:32.918 [2024-06-11 09:44:04.693520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.693550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.693969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.693999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.694412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.694443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.694870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.694899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.695313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.695362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.695801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.695830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.696198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.696228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.696613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.696645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.697049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.697079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 00:29:32.918 [2024-06-11 09:44:04.697497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.918 [2024-06-11 09:44:04.697527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.918 qpair failed and we were unable to recover it. 
00:29:32.918 [2024-06-11 09:44:04.697943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.919 [2024-06-11 09:44:04.697973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.919 qpair failed and we were unable to recover it. 00:29:32.919 [2024-06-11 09:44:04.698387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.919 [2024-06-11 09:44:04.698418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.919 qpair failed and we were unable to recover it. 00:29:32.919 [2024-06-11 09:44:04.698822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.919 [2024-06-11 09:44:04.698851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.919 qpair failed and we were unable to recover it. 00:29:32.919 [2024-06-11 09:44:04.699266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.919 [2024-06-11 09:44:04.699295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.919 qpair failed and we were unable to recover it. 00:29:32.919 [2024-06-11 09:44:04.699633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.919 [2024-06-11 09:44:04.699670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.919 qpair failed and we were unable to recover it. 00:29:32.919 [2024-06-11 09:44:04.700081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.919 [2024-06-11 09:44:04.700110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.919 qpair failed and we were unable to recover it. 00:29:32.919 [2024-06-11 09:44:04.700482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.919 [2024-06-11 09:44:04.700515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.919 qpair failed and we were unable to recover it. 00:29:32.919 [2024-06-11 09:44:04.700916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.919 [2024-06-11 09:44:04.700946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.919 qpair failed and we were unable to recover it. 00:29:32.919 [2024-06-11 09:44:04.701361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.919 [2024-06-11 09:44:04.701395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.919 qpair failed and we were unable to recover it. 00:29:32.919 [2024-06-11 09:44:04.701815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.919 [2024-06-11 09:44:04.701844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.919 qpair failed and we were unable to recover it. 
00:29:32.919 [2024-06-11 09:44:04.702256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.919 [2024-06-11 09:44:04.702287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.919 qpair failed and we were unable to recover it.
00:29:32.919 [2024-06-11 09:44:04.702681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.919 [2024-06-11 09:44:04.702714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.919 qpair failed and we were unable to recover it.
00:29:32.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1331690 Killed "${NVMF_APP[@]}" "$@"
00:29:32.919 [2024-06-11 09:44:04.703164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.919 [2024-06-11 09:44:04.703195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.919 qpair failed and we were unable to recover it.
00:29:32.919 09:44:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:32.919 [2024-06-11 09:44:04.703702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.919 [2024-06-11 09:44:04.703732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.919 qpair failed and we were unable to recover it.
00:29:32.919 09:44:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:32.919 [2024-06-11 09:44:04.704089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.919 [2024-06-11 09:44:04.704119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.919 qpair failed and we were unable to recover it.
00:29:32.919 09:44:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:32.919 [2024-06-11 09:44:04.704456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.919 [2024-06-11 09:44:04.704487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.919 qpair failed and we were unable to recover it.
00:29:32.919 09:44:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable
00:29:32.919 09:44:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:32.919 [2024-06-11 09:44:04.704816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.919 [2024-06-11 09:44:04.704854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:32.919 qpair failed and we were unable to recover it.
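The shell message above shows that PID 1331690, the target application ("${NVMF_APP[@]}"), was killed at target_disconnect.sh line 36; every qpair connect from the host side is then refused until disconnect_init restarts the target with nvmfappstart -m 0xF0. A self-contained sketch of that retry-until-listening behavior, assuming plain POSIX sockets rather than SPDK's transport code, with an illustrative retry count and interval (the real test retries far more often, as the log shows):

```c
/* Hedged sketch (not SPDK's reconnect logic): keep attempting the TCP
 * connect to 10.0.0.2:4420 until the restarted target listens again. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static int try_connect(const char *ip, int port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    struct sockaddr_in a = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, ip, &a.sin_addr);
    int rc = connect(fd, (struct sockaddr *)&a, sizeof(a));
    int saved = errno;                /* keep connect()'s errno across close() */
    close(fd);
    errno = saved;
    return rc;                        /* 0 on success, -1 with errno set */
}

int main(void)
{
    for (int i = 0; i < 100; i++) {   /* illustrative bound, not the test's */
        if (try_connect("10.0.0.2", 4420) == 0) {
            puts("target is listening again");
            return 0;
        }
        fprintf(stderr, "attempt %d: errno = %d (%s)\n",
                i + 1, errno, strerror(errno));
        usleep(100 * 1000);           /* illustrative 100 ms between attempts */
    }
    return 1;                         /* unable to recover */
}
```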
00:29:32.919 [2024-06-11 09:44:04.705275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.919 [2024-06-11 09:44:04.705305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.919 qpair failed and we were unable to recover it. 00:29:32.919 [2024-06-11 09:44:04.705754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.919 [2024-06-11 09:44:04.705784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.919 qpair failed and we were unable to recover it. 00:29:32.919 [2024-06-11 09:44:04.706208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.919 [2024-06-11 09:44:04.706239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.919 qpair failed and we were unable to recover it. 00:29:32.919 [2024-06-11 09:44:04.706627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.919 [2024-06-11 09:44:04.706657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.919 qpair failed and we were unable to recover it. 00:29:32.919 [2024-06-11 09:44:04.707013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.919 [2024-06-11 09:44:04.707042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.919 qpair failed and we were unable to recover it. 00:29:32.919 [2024-06-11 09:44:04.707454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.919 [2024-06-11 09:44:04.707484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.919 qpair failed and we were unable to recover it. 00:29:32.919 [2024-06-11 09:44:04.707916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.919 [2024-06-11 09:44:04.707946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:32.919 qpair failed and we were unable to recover it. 00:29:33.190 [2024-06-11 09:44:04.708436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.190 [2024-06-11 09:44:04.708469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.190 qpair failed and we were unable to recover it. 00:29:33.190 [2024-06-11 09:44:04.708874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.190 [2024-06-11 09:44:04.708907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.190 qpair failed and we were unable to recover it. 00:29:33.190 [2024-06-11 09:44:04.709388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.190 [2024-06-11 09:44:04.709420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.190 qpair failed and we were unable to recover it. 
00:29:33.190 [2024-06-11 09:44:04.709730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.190 [2024-06-11 09:44:04.709758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.190 qpair failed and we were unable to recover it.
00:29:33.190 [2024-06-11 09:44:04.710167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.190 [2024-06-11 09:44:04.710204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.190 qpair failed and we were unable to recover it.
00:29:33.190 [2024-06-11 09:44:04.710642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.190 [2024-06-11 09:44:04.710676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.190 qpair failed and we were unable to recover it.
00:29:33.190 [2024-06-11 09:44:04.711071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.190 [2024-06-11 09:44:04.711101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.190 qpair failed and we were unable to recover it.
00:29:33.190 [2024-06-11 09:44:04.711514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.190 [2024-06-11 09:44:04.711545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.190 qpair failed and we were unable to recover it.
00:29:33.190 [2024-06-11 09:44:04.711845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.190 [2024-06-11 09:44:04.711876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.190 qpair failed and we were unable to recover it.
00:29:33.190 [2024-06-11 09:44:04.712290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.190 [2024-06-11 09:44:04.712333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.191 qpair failed and we were unable to recover it.
00:29:33.191 09:44:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1332620
00:29:33.191 [2024-06-11 09:44:04.712753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.191 [2024-06-11 09:44:04.712784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.191 qpair failed and we were unable to recover it.
00:29:33.191 09:44:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1332620
00:29:33.191 [2024-06-11 09:44:04.713163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.191 [2024-06-11 09:44:04.713196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.191 qpair failed and we were unable to recover it.
00:29:33.191 09:44:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1332620 ']'
00:29:33.191 09:44:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:33.191 09:44:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:33.191 [2024-06-11 09:44:04.713630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.191 [2024-06-11 09:44:04.713661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.191 qpair failed and we were unable to recover it.
00:29:33.191 09:44:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100
00:29:33.191 09:44:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:33.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:33.191 [2024-06-11 09:44:04.714086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.191 [2024-06-11 09:44:04.714116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.191 qpair failed and we were unable to recover it.
00:29:33.191 09:44:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable
00:29:33.191 09:44:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:33.191 [2024-06-11 09:44:04.714551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.191 [2024-06-11 09:44:04.714584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.191 qpair failed and we were unable to recover it.
00:29:33.191 [2024-06-11 09:44:04.715027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.191 [2024-06-11 09:44:04.715058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.191 qpair failed and we were unable to recover it.
00:29:33.191 [2024-06-11 09:44:04.715453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.191 [2024-06-11 09:44:04.715484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.191 qpair failed and we were unable to recover it.
00:29:33.191 [2024-06-11 09:44:04.715895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.191 [2024-06-11 09:44:04.715925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.191 qpair failed and we were unable to recover it.
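The trace above shows waitforlisten polling (local max_retries=100) for the new nvmf_tgt process, PID 1332620, to come up and listen on the RPC socket /var/tmp/spdk.sock. The real helper is a bash function in autotest_common.sh; the sketch below only mimics the observable behavior with plain AF_UNIX connect() attempts, and the 100 ms poll interval is an assumption:

```c
/* Hedged sketch of what "waitforlisten" waits for: something accepting on
 * the UNIX domain socket /var/tmp/spdk.sock, i.e. the target's RPC server. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

    for (int i = 0; i < 100; i++) {              /* max_retries from the trace */
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            puts("nvmf_tgt RPC socket is up");   /* target ready for RPCs */
            close(fd);
            return 0;
        }
        close(fd);
        usleep(100 * 1000);                      /* assumed 100 ms poll interval */
    }
    fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
    return 1;
}
```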
00:29:33.191 [2024-06-11 09:44:04.716341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.716373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.716690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.716724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.717136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.717166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.717571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.717603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.718025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.718055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.718487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.718518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.718912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.718943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.719365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.719396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.719822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.719853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.720274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.720305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 
00:29:33.191 [2024-06-11 09:44:04.720721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.720754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.721185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.721215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.721590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.721622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.721992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.722026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.722342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.722373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.722773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.722804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.723210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.723241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.723668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.723699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.724121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.724152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.724645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.724677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 
00:29:33.191 [2024-06-11 09:44:04.725051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.725080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.725385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.725418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.725870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.725901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.726333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.726363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.726793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.726823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.727189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.727219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.727662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.727692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.728115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.728144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.728556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.728586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.729019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.729049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 
00:29:33.191 [2024-06-11 09:44:04.729456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.729487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.729933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.729962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.730382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.730412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.730786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.730817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.731094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.731127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.731486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.731524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.731928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.731956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.732225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.732253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.732441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.732475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.732957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.732986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 
00:29:33.191 [2024-06-11 09:44:04.733404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.733435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.733807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.733838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.734139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.734171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.734605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.734634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.735045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.735076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.735584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.735616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.735991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.736022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.736468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.736500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.736927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.736958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.737341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.737371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 
00:29:33.191 [2024-06-11 09:44:04.737803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.737832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.738150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.738182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.738576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.738606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.739028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.739058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.739477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.191 [2024-06-11 09:44:04.739507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.191 qpair failed and we were unable to recover it. 00:29:33.191 [2024-06-11 09:44:04.739941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.739970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.740275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.740307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.740805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.740834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.741246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.741277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.741680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.741712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 
00:29:33.192 [2024-06-11 09:44:04.742151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.742181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.742465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.742495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.742907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.742939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.743357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.743388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.743700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.743732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.744169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.744199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.744507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.744537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.744959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.744990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.745365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.745397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.745822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.745853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 
00:29:33.192 [2024-06-11 09:44:04.746263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.746293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.746721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.746751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.747112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.747143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.747592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.747626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.748032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.748063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.748481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.748518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.748920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.748951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.749367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.749399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.749877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.749908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.750339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.750369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 
00:29:33.192 [2024-06-11 09:44:04.750674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.750706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.751134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.751169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.751611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.751645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.751947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.751975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.752394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.752425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.752712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.752740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.753149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.753179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.753599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.753629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.754041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.754072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.754395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.754425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 
00:29:33.192 [2024-06-11 09:44:04.754880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.754909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.755289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.755346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.755829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.755858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.756292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.756335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.756767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.756798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.757218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.757249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.757683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.757713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.758094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.758124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.758595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.758626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.759050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.759080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 
00:29:33.192 [2024-06-11 09:44:04.759494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.759525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.759742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.759772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.760170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.760202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.760600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.760631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.760933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.760967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.761389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.761419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.761873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.761904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.762258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.762288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.762735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.762767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.763194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.763224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 
00:29:33.192 [2024-06-11 09:44:04.763655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.763686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.764110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.764140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.764503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.764536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.764953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.764981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.765392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.765423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.765829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.765863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.766277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.766308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.766626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.766656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.766922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.766949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 00:29:33.192 [2024-06-11 09:44:04.767342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.192 [2024-06-11 09:44:04.767372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.192 qpair failed and we were unable to recover it. 
00:29:33.193 [2024-06-11 09:44:04.769972] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:29:33.193 [2024-06-11 09:44:04.770039] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the triplet repeats with timestamps 09:44:04.770046 through 09:44:04.805928 ...]
00:29:33.194 EAL: No free 2048 kB hugepages reported on node 1
[... the triplet repeats with timestamps 09:44:04.806330 through 09:44:04.848433 ...]
00:29:33.195 [2024-06-11 09:44:04.848839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.195 [2024-06-11 09:44:04.848868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.195 qpair failed and we were unable to recover it.
00:29:33.195 [2024-06-11 09:44:04.849326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.195 [2024-06-11 09:44:04.849357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.195 qpair failed and we were unable to recover it. 00:29:33.195 [2024-06-11 09:44:04.849782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.195 [2024-06-11 09:44:04.849813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.195 qpair failed and we were unable to recover it. 00:29:33.195 [2024-06-11 09:44:04.850212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.195 [2024-06-11 09:44:04.850242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.195 qpair failed and we were unable to recover it. 00:29:33.195 [2024-06-11 09:44:04.850690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.195 [2024-06-11 09:44:04.850721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.195 qpair failed and we were unable to recover it. 00:29:33.195 [2024-06-11 09:44:04.851153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.195 [2024-06-11 09:44:04.851185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.195 qpair failed and we were unable to recover it. 00:29:33.195 [2024-06-11 09:44:04.851520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.851560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.851879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.851909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.852336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.852369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.852678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.852712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.853064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.853094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 
00:29:33.196 [2024-06-11 09:44:04.853473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.853504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.853788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.853818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.854196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.854225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.854678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.854708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.855154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.855185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.855583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.855615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.856055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.856085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.856515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.856545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.856854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.856884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.857222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.857253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 
00:29:33.196 [2024-06-11 09:44:04.857678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.857710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.858166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.858198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.858618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.858648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.858956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.858987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.859289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:33.196 [2024-06-11 09:44:04.859370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.859399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.859764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.859795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.860188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.860219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.860714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.860745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.861175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.861205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 
00:29:33.196 [2024-06-11 09:44:04.861481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.861511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.861939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.861970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.862232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.862265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.862730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.862761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.863108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.863138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.863522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.863553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.863968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.863999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.864190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.864218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.864634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.864664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.865094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.865124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 
00:29:33.196 [2024-06-11 09:44:04.865414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.865447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.865972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.866002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.866418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.866449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.866650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.866681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.867162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.867192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.867613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.867643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.868025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.868055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.868388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.868418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.868839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.868868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.869332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.869363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 
00:29:33.196 [2024-06-11 09:44:04.869688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.869717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.870026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.870056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.870552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.870583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.871018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.871047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.871331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.871362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.871825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.871855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.872275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.872304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.872746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.872776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.873195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.873224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.873642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.873680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 
00:29:33.196 [2024-06-11 09:44:04.874134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.874165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.874465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.874496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.874814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.874844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.875149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.875178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.875497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.875526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.875924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.875953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.876255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.876284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.876615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.876645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.877081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.877113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.877442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.877472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 
00:29:33.196 [2024-06-11 09:44:04.877869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.877898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.878334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.878364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.878802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.878831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.879251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.879281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.879709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.196 [2024-06-11 09:44:04.879742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.196 qpair failed and we were unable to recover it. 00:29:33.196 [2024-06-11 09:44:04.880166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.880195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.880689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.880720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.881137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.881167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.881557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.881589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.882007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.882037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 
00:29:33.197 [2024-06-11 09:44:04.882448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.882478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.882918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.882948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.883241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.883274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.883762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.883793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.884070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.884103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.884501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.884532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.884951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.884981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.885274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.885305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.885775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.885808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.886218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.886248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 
00:29:33.197 [2024-06-11 09:44:04.886679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.886712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.887140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.887171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.887610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.887641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.887943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.887974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.888400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.888430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.888792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.888823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.889193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.889222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.889626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.889658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.890075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.890104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.890508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.890545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 
00:29:33.197 [2024-06-11 09:44:04.890970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.891000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.891424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.891454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.891892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.891922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.892343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.892378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.892823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.892853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.893288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.893339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.893671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.893704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.894157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.894186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.894609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.894640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.894916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.894948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 
00:29:33.197 [2024-06-11 09:44:04.895382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.895412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.895725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.895755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.896112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.896141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.896560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.896590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.897013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.897046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.897457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.897487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.897916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.897947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.898383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.898414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.898828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.898859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.899274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.899304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 
00:29:33.197 [2024-06-11 09:44:04.899738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.899769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.900186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.900217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.900642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.900678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.901097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.901127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.901552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.901585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.902001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.902032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.902445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.902477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.902737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.902767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.903165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.903195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 00:29:33.197 [2024-06-11 09:44:04.903609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.197 [2024-06-11 09:44:04.903641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.197 qpair failed and we were unable to recover it. 
00:29:33.197 [2024-06-11 09:44:04.904059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.197 [2024-06-11 09:44:04.904089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.197 qpair failed and we were unable to recover it.
00:29:33.198 [... the same connect() failed (errno = 111) / qpair-recovery error repeats for every reconnect attempt, 09:44:04.904 through 09:44:04.951 ...]
00:29:33.199 [... further identical connect()/qpair errors, 09:44:04.951 through 09:44:04.954 ...]
00:29:33.199 [2024-06-11 09:44:04.954531] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:33.199 [2024-06-11 09:44:04.954585] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:33.199 [2024-06-11 09:44:04.954593] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:33.199 [2024-06-11 09:44:04.954600] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:33.199 [2024-06-11 09:44:04.954606] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:33.199 [2024-06-11 09:44:04.954679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.199 [2024-06-11 09:44:04.954708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.199 qpair failed and we were unable to recover it.
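The app_setup_trace NOTICE lines above are the application's own pointer for debugging this run. A minimal sketch of acting on them on the test node (the build/bin/ location of spdk_trace is an assumption about this checkout; the '-s nvmf -i 0' arguments and the /dev/shm/nvmf_trace.0 path are quoted verbatim from the log):

  # While (or right after) the nvmf app is running on the test node:
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt   # decode a snapshot of the recorded tracepoints
  cp /dev/shm/nvmf_trace.0 /tmp/                              # or keep the raw trace file for offline analysis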
00:29:33.199 [2024-06-11 09:44:04.954780] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5
00:29:33.199 [2024-06-11 09:44:04.954915] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6
00:29:33.199 [2024-06-11 09:44:04.955072] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4
00:29:33.199 [2024-06-11 09:44:04.955072] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7
00:29:33.199 [... identical connect()/qpair errors continue, 09:44:04.955 through 09:44:04.958 ...]
00:29:33.200 [... the same connect() failed (errno = 111) / qpair-recovery error repeats, 09:44:04.959 through 09:44:04.991 ...]
00:29:33.200 [2024-06-11 09:44:04.992199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.200 [2024-06-11 09:44:04.992236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.200 qpair failed and we were unable to recover it.
00:29:33.200 [2024-06-11 09:44:04.992487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.200 [2024-06-11 09:44:04.992517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.200 qpair failed and we were unable to recover it. 00:29:33.200 [2024-06-11 09:44:04.992930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.200 [2024-06-11 09:44:04.992960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.200 qpair failed and we were unable to recover it. 00:29:33.200 [2024-06-11 09:44:04.993388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.200 [2024-06-11 09:44:04.993420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.200 qpair failed and we were unable to recover it. 00:29:33.200 [2024-06-11 09:44:04.993844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.201 [2024-06-11 09:44:04.993874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.201 qpair failed and we were unable to recover it. 00:29:33.201 [2024-06-11 09:44:04.994197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.201 [2024-06-11 09:44:04.994226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.201 qpair failed and we were unable to recover it. 00:29:33.201 [2024-06-11 09:44:04.994489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.201 [2024-06-11 09:44:04.994519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.201 qpair failed and we were unable to recover it. 00:29:33.201 [2024-06-11 09:44:04.994942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.201 [2024-06-11 09:44:04.994971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.201 qpair failed and we were unable to recover it. 00:29:33.201 [2024-06-11 09:44:04.995384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.201 [2024-06-11 09:44:04.995416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.201 qpair failed and we were unable to recover it. 00:29:33.201 [2024-06-11 09:44:04.995848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.201 [2024-06-11 09:44:04.995878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.201 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:04.996193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:04.996225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 
00:29:33.473 [2024-06-11 09:44:04.996496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:04.996527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:04.996989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:04.997019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:04.997445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:04.997476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:04.997891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:04.997921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:04.998356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:04.998387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:04.998649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:04.998680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:04.999105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:04.999135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:04.999640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:04.999670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:05.000099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:05.000128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:05.000433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:05.000469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 
00:29:33.473 [2024-06-11 09:44:05.000770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:05.000800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:05.001285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:05.001327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:05.001740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:05.001773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:05.002015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:05.002044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:05.002526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:05.002557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:05.002973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:05.003002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:05.003419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:05.003449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:05.003680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:05.003713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:05.004049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:05.004078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:05.004493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:05.004524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 
00:29:33.473 [2024-06-11 09:44:05.004772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:05.004802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:05.005211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:05.005240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:05.005671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.473 [2024-06-11 09:44:05.005702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.473 qpair failed and we were unable to recover it. 00:29:33.473 [2024-06-11 09:44:05.006016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.006046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.006491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.006522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.006944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.006972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.007412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.007442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.007868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.007898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.008363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.008396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.008820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.008857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 
00:29:33.474 [2024-06-11 09:44:05.009146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.009178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.009597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.009627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.009927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.009955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.010443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.010474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.010882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.010912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.011335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.011365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.011803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.011832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.012248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.012279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.012617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.012648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.013060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.013090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 
00:29:33.474 [2024-06-11 09:44:05.013515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.013546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.013961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.013991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.014253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.014284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.014752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.014782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.015210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.015239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.015654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.015685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.016111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.016140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.016567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.016597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.017045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.017077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.017486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.017518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 
00:29:33.474 [2024-06-11 09:44:05.017807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.017838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.018231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.018260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.018695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.018726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.019139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.019170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.019594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.019625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.019881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.019910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.020349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.020379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.020796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.020825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.021257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.021288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.021675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.021706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 
00:29:33.474 [2024-06-11 09:44:05.022130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.474 [2024-06-11 09:44:05.022159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.474 qpair failed and we were unable to recover it. 00:29:33.474 [2024-06-11 09:44:05.022462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.022497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.022919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.022947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.023371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.023402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.023832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.023861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.024274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.024303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.024530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.024560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.024809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.024839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.025283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.025313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.025631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.025667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 
00:29:33.475 [2024-06-11 09:44:05.025909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.025939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.026347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.026378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.026819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.026849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.027089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.027118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.027383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.027413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.027836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.027864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.028334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.028364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.028624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.028653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.029077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.029107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.029364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.029394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 
00:29:33.475 [2024-06-11 09:44:05.029785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.029815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.030231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.030260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.030710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.030741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.031147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.031177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.031598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.031629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.032046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.032075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.032503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.032535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.032775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.032804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.033086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.033115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.033362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.033392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 
00:29:33.475 [2024-06-11 09:44:05.033773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.033803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.034065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.034094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.034499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.034529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.034941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.034970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.035215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.035245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.035720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.035752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.036146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.036177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.036524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.036554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.036964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.036993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.475 qpair failed and we were unable to recover it. 00:29:33.475 [2024-06-11 09:44:05.037409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.475 [2024-06-11 09:44:05.037439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 
00:29:33.476 [2024-06-11 09:44:05.037698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.037728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.038093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.038123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.038421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.038456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.038875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.038904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.039338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.039368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.039601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.039632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.039852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.039880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.040155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.040187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.040609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.040640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.041078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.041114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 
00:29:33.476 [2024-06-11 09:44:05.041416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.041447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.041850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.041879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.042257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.042286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.042680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.042712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.043127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.043157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.043424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.043455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.043738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.043767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.044127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.044157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.044500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.044531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.044938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.044966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 
00:29:33.476 [2024-06-11 09:44:05.045392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.045423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.045584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.045614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.046045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.046075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.046501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.046531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.046905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.046934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.047413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.047444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.047691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.047719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.048128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.048157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.048564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.048595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.048815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.048845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 
00:29:33.476 [2024-06-11 09:44:05.049287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.049327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.049592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.049621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.050043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.050072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.050497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.050528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.050964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.050993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.051371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.051402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.051686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.051729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.476 [2024-06-11 09:44:05.052142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.476 [2024-06-11 09:44:05.052171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.476 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.052600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.052631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.053046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.053076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 
00:29:33.477 [2024-06-11 09:44:05.053466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.053497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.053904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.053934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.054233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.054266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.054723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.054756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.055059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.055092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.055496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.055526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.055776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.055805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.056208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.056239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.056632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.056663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.057088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.057168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 
00:29:33.477 [2024-06-11 09:44:05.057581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.057612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.058020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.058050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.058474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.058506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.058739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.058769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.059058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.059087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.059502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.059532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.059902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.059931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.060352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.060383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.060817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.060847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.061222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.061254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 
00:29:33.477 [2024-06-11 09:44:05.061660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.061691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.062107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.062137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.062567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.062597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.063009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.063039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.063465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.063496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.063753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.063784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.064220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.064249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.477 [2024-06-11 09:44:05.064505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.477 [2024-06-11 09:44:05.064536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.477 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.064985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.065015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.065434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.065466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 
00:29:33.478 [2024-06-11 09:44:05.065711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.065741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.066154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.066183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.066604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.066634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.067082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.067111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.067363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.067394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.067813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.067842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.068276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.068305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.068751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.068781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.069078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.069110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.069515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.069547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 
00:29:33.478 [2024-06-11 09:44:05.069991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.070020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.070435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.070465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.070946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.070976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.071396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.071426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.071860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.071889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.072308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.072358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.072621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.072649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.073045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.073074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.073497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.073529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.073946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.073982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 
00:29:33.478 [2024-06-11 09:44:05.074396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.074428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.074801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.074833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.075200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.075229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.075497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.075527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.075926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.075955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.076205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.076235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.076479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.076509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.076918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.076948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.077391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.077422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.077671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.077700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 
00:29:33.478 [2024-06-11 09:44:05.078154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.078183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.078450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.078481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.078913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.078944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.079333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.079364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.079815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.079844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.478 [2024-06-11 09:44:05.080259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.478 [2024-06-11 09:44:05.080287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.478 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.080572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.080606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.080853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.080883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.081268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.081299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.081621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.081654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 
00:29:33.479 [2024-06-11 09:44:05.082086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.082116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.082537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.082568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.082998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.083027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.083153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.083181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.083578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.083608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.084043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.084073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.084468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.084500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.084911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.084942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.085369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.085399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.085857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.085887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 
00:29:33.479 [2024-06-11 09:44:05.086271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.086301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.086608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.086638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.087061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.087091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.087520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.087550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.087850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.087881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.088340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.088371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.088602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.088632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.089042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.089072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.089510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.089541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.089958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.089996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 
00:29:33.479 [2024-06-11 09:44:05.090432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.090462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.090740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.090770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.091234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.091265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.091516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.091547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.091975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.092004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.092419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.092450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.092885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.092915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.093136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.093165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.093552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.093582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.094004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.094034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 
00:29:33.479 [2024-06-11 09:44:05.094239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.094270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.094809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.094840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.095273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.095304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.095740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.479 [2024-06-11 09:44:05.095772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.479 qpair failed and we were unable to recover it. 00:29:33.479 [2024-06-11 09:44:05.096221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.096251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.096503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.096534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.096931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.096960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.097197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.097227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.097684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.097715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.098065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.098095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 
00:29:33.480 [2024-06-11 09:44:05.098514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.098544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.098964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.098994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.099430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.099461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.099895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.099926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.100353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.100383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.100816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.100846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.101117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.101153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.101556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.101586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.102002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.102034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.102455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.102487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 
00:29:33.480 [2024-06-11 09:44:05.102753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.102782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.103185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.103215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.103457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.103488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.103862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.103891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.104362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.104394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.104812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.104842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.105069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.105099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.105511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.105540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.105924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.105952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.106351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.106389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 
00:29:33.480 [2024-06-11 09:44:05.106844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.106874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.107296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.107349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.107765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.107795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.108219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.108248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.108619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.108650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.109088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.109118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.109563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.109593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.110017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.110046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.110477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.110506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.110926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.110955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 
00:29:33.480 [2024-06-11 09:44:05.111382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.111413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.111830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.111860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.480 qpair failed and we were unable to recover it. 00:29:33.480 [2024-06-11 09:44:05.112276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.480 [2024-06-11 09:44:05.112305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.112768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.112798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.113241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.113272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.113727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.113759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.114185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.114215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.114451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.114482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.114891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.114920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.115171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.115200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 
00:29:33.481 [2024-06-11 09:44:05.115596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.115627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.116050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.116079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.116468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.116498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.116900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.116929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.117195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.117224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.117602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.117632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.118051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.118087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.118491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.118521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.118770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.118799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.119094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.119123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 
00:29:33.481 [2024-06-11 09:44:05.119378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.119409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.119801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.119832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.120207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.120238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.120652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.120683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.121120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.121149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.121461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.121492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.121918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.121947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.122362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.122394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.122816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.122846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 00:29:33.481 [2024-06-11 09:44:05.123267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.481 [2024-06-11 09:44:05.123296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.481 qpair failed and we were unable to recover it. 
00:29:33.481 [2024-06-11 09:44:05.123711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.481 [2024-06-11 09:44:05.123741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.481 qpair failed and we were unable to recover it.
00:29:33.481 [2024-06-11 09:44:05.124160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.481 [2024-06-11 09:44:05.124188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.481 qpair failed and we were unable to recover it.
00:29:33.481 [2024-06-11 09:44:05.124597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.481 [2024-06-11 09:44:05.124629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.481 qpair failed and we were unable to recover it.
00:29:33.481 [2024-06-11 09:44:05.124884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.481 [2024-06-11 09:44:05.124913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.481 qpair failed and we were unable to recover it.
00:29:33.481 [2024-06-11 09:44:05.125348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.481 [2024-06-11 09:44:05.125379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.481 qpair failed and we were unable to recover it.
00:29:33.481 [2024-06-11 09:44:05.125639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.481 [2024-06-11 09:44:05.125668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.481 qpair failed and we were unable to recover it.
00:29:33.481 [2024-06-11 09:44:05.126088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.481 [2024-06-11 09:44:05.126117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.481 qpair failed and we were unable to recover it.
00:29:33.481 [2024-06-11 09:44:05.126535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.481 [2024-06-11 09:44:05.126566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.481 qpair failed and we were unable to recover it.
00:29:33.481 [2024-06-11 09:44:05.126938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.481 [2024-06-11 09:44:05.126967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.481 qpair failed and we were unable to recover it.
00:29:33.481 [2024-06-11 09:44:05.127341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.481 [2024-06-11 09:44:05.127373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.481 qpair failed and we were unable to recover it.
00:29:33.481 [2024-06-11 09:44:05.127806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.481 [2024-06-11 09:44:05.127835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.481 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.128240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.128270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.128715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.128746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.129160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.129190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.129595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.129627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.130038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.130067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.130381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.130416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.130860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.130890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.131326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.131356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.131630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.131663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.132080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.132110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.132543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.132573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.132975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.133005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.133469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.133500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.133810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.133843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.134273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.134304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.134651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.134695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.135101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.135131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.135596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.135626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.135882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.135912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.136337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.136367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.136801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.136832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.137274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.137304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.137732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.137763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.138188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.138218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.138622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.138653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.138945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.138976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.139207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.139238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.139693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.139723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.140144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.140172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.140663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.140693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.141119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.141149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.141592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.482 [2024-06-11 09:44:05.141622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.482 qpair failed and we were unable to recover it.
00:29:33.482 [2024-06-11 09:44:05.141734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.141760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.142230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.142259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.142697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.142726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.143154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.143183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.143445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.143478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.143879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.143908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.144339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.144369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.144605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.144634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.145066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.145096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.145538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.145569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.145982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.146011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.146407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.146439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.146569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.146598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.146993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.147022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.147251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.147281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.147567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.147598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.147836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.147865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.148308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.148351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.148764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.148793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.149199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.149228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.149659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.149690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.150122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.150152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.150400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.150431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.150678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.150712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.151128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.151157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.151582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.151613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.152037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.152069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.152497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.152527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.152907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.152937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.153378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.153409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.153841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.153870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.154302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.154343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.154742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.154772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.155190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.155220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.155638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.155670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.155882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.155912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.156172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.156202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.156610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.156640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.483 qpair failed and we were unable to recover it.
00:29:33.483 [2024-06-11 09:44:05.157054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.483 [2024-06-11 09:44:05.157083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.157341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.157371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.157778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.157807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.158235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.158266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.158719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.158750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.159205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.159233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.159547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.159579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.160003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.160033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.160443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.160474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.160781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.160811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.161237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.161267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.161429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.161458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.161774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.161807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.162061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.162089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.162497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.162528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.162955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.162985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.163402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.163433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.163868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.163897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.164325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.164356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.164679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.164710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.165083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.165115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.165544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.165575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.165986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.166016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.166445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.166475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.166726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.166757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.167172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.167208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.167634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.167666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.168098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.168128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.168511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.168542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.168783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.168813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.169221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.169251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.169680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.169710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.170113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.170142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.170568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.170598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.171006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.171036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.171348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.171380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.171808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.171838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.172285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.172327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.484 [2024-06-11 09:44:05.172747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.484 [2024-06-11 09:44:05.172777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.484 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.173088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.173118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.173542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.173572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.174001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.174031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.174447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.174478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.174754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.174784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.175204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.175233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.175637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.175667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.175837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.175871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.176306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.176348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.176768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.176798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.177229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.177259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.177766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.177796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.178201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.178232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.178608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.178640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.178876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.178905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.179214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.179244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.179496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.179527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.179935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.179964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.180149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.180181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.180602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.180632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.181072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.181102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.181507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.181537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.181816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.181844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.182292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.182345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.182644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.182676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.183082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.183113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.183557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.183595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.184008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.184037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.184277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.184307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.184758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.184787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.185248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.185277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.185694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.185726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.185962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.185992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.186402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.186433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.186828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.186857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.187111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.187141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.187571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.187601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.188066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.485 [2024-06-11 09:44:05.188096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.485 qpair failed and we were unable to recover it.
00:29:33.485 [2024-06-11 09:44:05.188492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.486 [2024-06-11 09:44:05.188523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.486 qpair failed and we were unable to recover it.
00:29:33.486 [2024-06-11 09:44:05.188986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.486 [2024-06-11 09:44:05.189015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.486 qpair failed and we were unable to recover it.
00:29:33.486 [2024-06-11 09:44:05.189483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.486 [2024-06-11 09:44:05.189514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.486 qpair failed and we were unable to recover it.
00:29:33.486 [2024-06-11 09:44:05.189816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.486 [2024-06-11 09:44:05.189845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.486 qpair failed and we were unable to recover it.
00:29:33.486 [2024-06-11 09:44:05.190101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.486 [2024-06-11 09:44:05.190133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.486 qpair failed and we were unable to recover it.
00:29:33.486 [2024-06-11 09:44:05.190553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.486 [2024-06-11 09:44:05.190584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.486 qpair failed and we were unable to recover it.
00:29:33.486 [2024-06-11 09:44:05.191010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.486 [2024-06-11 09:44:05.191039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.486 qpair failed and we were unable to recover it.
00:29:33.486 [2024-06-11 09:44:05.191459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.486 [2024-06-11 09:44:05.191489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.486 qpair failed and we were unable to recover it.
00:29:33.486 [2024-06-11 09:44:05.191848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.486 [2024-06-11 09:44:05.191878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.486 qpair failed and we were unable to recover it.
00:29:33.486 [2024-06-11 09:44:05.192300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.486 [2024-06-11 09:44:05.192341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.486 qpair failed and we were unable to recover it.
00:29:33.486 [2024-06-11 09:44:05.192763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.486 [2024-06-11 09:44:05.192794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.486 qpair failed and we were unable to recover it.
00:29:33.486 [2024-06-11 09:44:05.193226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.486 [2024-06-11 09:44:05.193255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.486 qpair failed and we were unable to recover it.
00:29:33.486 [2024-06-11 09:44:05.193524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.193554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.193971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.194001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.194410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.194441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.194689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.194718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.195181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.195210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.195626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.195656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.196089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.196118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.196423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.196454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.196699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.196729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.197171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.197200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 
00:29:33.486 [2024-06-11 09:44:05.197599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.197629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.197931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.197960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.198389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.198420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.198840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.198869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.199330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.199361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.199787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.199818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.200239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.200274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.200717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.200748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.201133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.201164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.201589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.201619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 
00:29:33.486 [2024-06-11 09:44:05.202040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.202072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.202474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.202505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.202746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.202776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.203186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.203216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.203612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.486 [2024-06-11 09:44:05.203642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.486 qpair failed and we were unable to recover it. 00:29:33.486 [2024-06-11 09:44:05.203889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.203918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.204326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.204356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.204828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.204856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.205247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.205277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.205732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.205762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 
00:29:33.487 [2024-06-11 09:44:05.206200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.206229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.206654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.206684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.207110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.207139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.207553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.207585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.208015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.208046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.208461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.208492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.208937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.208966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.209373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.209403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.209841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.209870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.210292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.210335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 
00:29:33.487 [2024-06-11 09:44:05.210596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.210625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.211029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.211058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.211491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.211522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.211937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.211966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.212234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.212267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.212608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.212639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.213049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.213078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.213513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.213543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.213973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.214001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.214521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.214553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 
00:29:33.487 [2024-06-11 09:44:05.214996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.215026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.215406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.215436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.215854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.215883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.216141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.216170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.216487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.216519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.216960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.216990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.217422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.217464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.217839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.217867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.218051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.487 [2024-06-11 09:44:05.218081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.487 qpair failed and we were unable to recover it. 00:29:33.487 [2024-06-11 09:44:05.218532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.218561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 
00:29:33.488 [2024-06-11 09:44:05.218993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.219022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.219465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.219496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.219918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.219947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.220362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.220392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.220827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.220856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.221275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.221305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.221552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.221582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.221991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.222021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.222439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.222468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.222904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.222933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 
00:29:33.488 [2024-06-11 09:44:05.223181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.223210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.223624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.223655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.224047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.224076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.224514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.224545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.224990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.225018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.225442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.225472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.225896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.225926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.226349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.226379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.226815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.226844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.227263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.227293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 
00:29:33.488 [2024-06-11 09:44:05.227727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.227757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.228118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.228149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.228575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.228606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.229035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.229065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.229298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.229340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.229761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.229790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.230219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.230250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.230497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.230527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.230960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.230989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.231350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.231380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 
00:29:33.488 [2024-06-11 09:44:05.231808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.231837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.232257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.232287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.232716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.232746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.233158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.233188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.233603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.233634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.233881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.233910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.234351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.234388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.488 qpair failed and we were unable to recover it. 00:29:33.488 [2024-06-11 09:44:05.234797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.488 [2024-06-11 09:44:05.234826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.235081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.235110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.235456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.235487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 
00:29:33.489 [2024-06-11 09:44:05.235797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.235826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.236244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.236274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.236673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.236703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.237109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.237139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.237413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.237444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.237681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.237711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.238157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.238185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.238495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.238525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.238952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.238980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.239392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.239423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 
00:29:33.489 [2024-06-11 09:44:05.239732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.239762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.240171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.240200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.240553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.240584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.240993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.241022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.241495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.241526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.241760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.241789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.242252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.242281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.242550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.242580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.242875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.242906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.243355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.243386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 
00:29:33.489 [2024-06-11 09:44:05.243805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.243835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.244095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.244124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.244500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.244530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.244955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.244985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.245222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.245251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.245682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.245712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.245976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.246005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.246431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.246462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.246883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.246913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.247338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.247369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 
00:29:33.489 [2024-06-11 09:44:05.247806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.247834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.248343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.248374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.248623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.248652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.249063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.249092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.249526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.249556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.489 [2024-06-11 09:44:05.249963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.489 [2024-06-11 09:44:05.249992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.489 qpair failed and we were unable to recover it. 00:29:33.490 [2024-06-11 09:44:05.250419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.490 [2024-06-11 09:44:05.250454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.490 qpair failed and we were unable to recover it. 00:29:33.490 [2024-06-11 09:44:05.250872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.490 [2024-06-11 09:44:05.250902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.490 qpair failed and we were unable to recover it. 00:29:33.490 [2024-06-11 09:44:05.251201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.490 [2024-06-11 09:44:05.251230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.490 qpair failed and we were unable to recover it. 00:29:33.490 [2024-06-11 09:44:05.251660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.490 [2024-06-11 09:44:05.251691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.490 qpair failed and we were unable to recover it. 
00:29:33.490 [2024-06-11 09:44:05.252121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.490 [2024-06-11 09:44:05.252153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.490 qpair failed and we were unable to recover it. 00:29:33.490 [2024-06-11 09:44:05.252397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.490 [2024-06-11 09:44:05.252427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.490 qpair failed and we were unable to recover it. 00:29:33.490 [2024-06-11 09:44:05.252840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.490 [2024-06-11 09:44:05.252870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.490 qpair failed and we were unable to recover it. 00:29:33.490 [2024-06-11 09:44:05.253286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.490 [2024-06-11 09:44:05.253330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.490 qpair failed and we were unable to recover it. 00:29:33.490 [2024-06-11 09:44:05.253760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.490 [2024-06-11 09:44:05.253789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.490 qpair failed and we were unable to recover it. 00:29:33.490 [2024-06-11 09:44:05.254035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.490 [2024-06-11 09:44:05.254063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.490 qpair failed and we were unable to recover it. 00:29:33.490 [2024-06-11 09:44:05.254494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.490 [2024-06-11 09:44:05.254525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.490 qpair failed and we were unable to recover it. 00:29:33.490 [2024-06-11 09:44:05.254897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.490 [2024-06-11 09:44:05.254926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.490 qpair failed and we were unable to recover it. 00:29:33.490 [2024-06-11 09:44:05.255348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.490 [2024-06-11 09:44:05.255379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.490 qpair failed and we were unable to recover it. 00:29:33.490 [2024-06-11 09:44:05.255794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.490 [2024-06-11 09:44:05.255825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.490 qpair failed and we were unable to recover it. 
00:29:33.490 [2024-06-11 09:44:05.256269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.490 [2024-06-11 09:44:05.256300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.490 qpair failed and we were unable to recover it.
[... the same three-line sequence -- connect() failed, errno = 111 / sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- repeats continuously from 09:44:05.256538 through 09:44:05.340647 ...]
00:29:33.767 [2024-06-11 09:44:05.341069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.767 [2024-06-11 09:44:05.341099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:33.767 qpair failed and we were unable to recover it.
00:29:33.767 [2024-06-11 09:44:05.341540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.767 [2024-06-11 09:44:05.341572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.767 qpair failed and we were unable to recover it. 00:29:33.767 [2024-06-11 09:44:05.341983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.767 [2024-06-11 09:44:05.342012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.767 qpair failed and we were unable to recover it. 00:29:33.767 [2024-06-11 09:44:05.342448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.767 [2024-06-11 09:44:05.342478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.767 qpair failed and we were unable to recover it. 00:29:33.767 [2024-06-11 09:44:05.342897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.767 [2024-06-11 09:44:05.342935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.767 qpair failed and we were unable to recover it. 00:29:33.767 [2024-06-11 09:44:05.343360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.767 [2024-06-11 09:44:05.343391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.767 qpair failed and we were unable to recover it. 00:29:33.767 [2024-06-11 09:44:05.343703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.767 [2024-06-11 09:44:05.343736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.767 qpair failed and we were unable to recover it. 00:29:33.767 [2024-06-11 09:44:05.344049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.767 [2024-06-11 09:44:05.344080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.767 qpair failed and we were unable to recover it. 00:29:33.767 [2024-06-11 09:44:05.344364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.767 [2024-06-11 09:44:05.344395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.767 qpair failed and we were unable to recover it. 00:29:33.767 [2024-06-11 09:44:05.344651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.767 [2024-06-11 09:44:05.344682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.767 qpair failed and we were unable to recover it. 00:29:33.767 [2024-06-11 09:44:05.345056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.767 [2024-06-11 09:44:05.345087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.767 qpair failed and we were unable to recover it. 
00:29:33.767 [2024-06-11 09:44:05.345508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.767 [2024-06-11 09:44:05.345540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.767 qpair failed and we were unable to recover it. 00:29:33.767 [2024-06-11 09:44:05.345959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.767 [2024-06-11 09:44:05.345989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.767 qpair failed and we were unable to recover it. 00:29:33.767 [2024-06-11 09:44:05.346419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.767 [2024-06-11 09:44:05.346449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.767 qpair failed and we were unable to recover it. 00:29:33.767 [2024-06-11 09:44:05.346836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.767 [2024-06-11 09:44:05.346867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.767 qpair failed and we were unable to recover it. 00:29:33.767 [2024-06-11 09:44:05.347280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.767 [2024-06-11 09:44:05.347311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.767 qpair failed and we were unable to recover it. 00:29:33.767 [2024-06-11 09:44:05.347765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.767 [2024-06-11 09:44:05.347797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.348183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.348215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.348532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.348564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.348798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.348828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.349236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.349264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 
00:29:33.768 [2024-06-11 09:44:05.349701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.349732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.350152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.350183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.350604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.350636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.351047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.351076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.351370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.351401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.351820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.351850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.352296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.352343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.352618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.352648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.353078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.353107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.353528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.353559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 
00:29:33.768 [2024-06-11 09:44:05.353994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.354024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.354450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.354481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.354882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.354912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.355176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.355205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.355622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.355653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.356077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.356106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.356513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.356546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.356970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.357001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.357437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.357467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.357746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.357776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 
00:29:33.768 [2024-06-11 09:44:05.358208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.358237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.358547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.358580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.358815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.358845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.359257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.359295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.359625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.359656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.359902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.359932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.360356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.360386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.360850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.360879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.361308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.361351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.361781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.361811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 
00:29:33.768 [2024-06-11 09:44:05.362080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.362109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.362412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.362445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.362893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.362923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.768 [2024-06-11 09:44:05.363345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.768 [2024-06-11 09:44:05.363377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.768 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.363824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.363854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.364287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.364335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.364739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.364768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.365010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.365040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.365310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.365353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.365749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.365779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 
00:29:33.769 [2024-06-11 09:44:05.366209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.366240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.366387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.366445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.366791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.366819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.367244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.367273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.367653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.367684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.368106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.368135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.368559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.368591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.369006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.369034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.369482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.369513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.370001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.370031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 
00:29:33.769 [2024-06-11 09:44:05.370472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.370503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.370925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.370957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.371258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.371287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.371717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.371749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.372004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.372033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.372404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.372435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.372865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.372893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.373331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.373363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.373790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.373819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.374068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.374097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 
00:29:33.769 [2024-06-11 09:44:05.374524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.374555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.374766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.374795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.375220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.375250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.375640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.375677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.376108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.376137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.376554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.376585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.376913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.376942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.377365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.377396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.377822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.377851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.378273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.378303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 
00:29:33.769 [2024-06-11 09:44:05.378735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.378766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.379182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.769 [2024-06-11 09:44:05.379212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.769 qpair failed and we were unable to recover it. 00:29:33.769 [2024-06-11 09:44:05.379509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.379540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.379964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.379995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.380421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.380452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.380878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.380908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.381338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.381368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.381796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.381826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.382253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.382284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.382701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.382733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 
00:29:33.770 [2024-06-11 09:44:05.383016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.383047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.383348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.383384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.383841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.383871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.384288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.384331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.384758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.384789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.385223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.385254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.385668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.385698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.385931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.385961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.386233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.386265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.386735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.386766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 
00:29:33.770 [2024-06-11 09:44:05.387212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.387242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.387630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.387662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.388105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.388136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.388556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.388589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.388989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.389019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.389269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.389297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.389763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.389794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.390224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.390255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.390533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.390563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.390860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.390890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 
00:29:33.770 [2024-06-11 09:44:05.391281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.391310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.391774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.391805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.392046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.392076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.392505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.392542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.392974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.393004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.393429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.393458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.393913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.393943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.394375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.394406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.394809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.394838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 00:29:33.770 [2024-06-11 09:44:05.395252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.770 [2024-06-11 09:44:05.395282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.770 qpair failed and we were unable to recover it. 
00:29:33.771 [2024-06-11 09:44:05.395722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.395752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.396173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.396203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.396662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.396693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.397117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.397145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.397551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.397582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.398000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.398030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.398438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.398470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.398773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.398803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.399233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.399264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.399701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.399733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 
00:29:33.771 [2024-06-11 09:44:05.400150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.400180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.400429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.400460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.400586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.400613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.401069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.401100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.401576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.401606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.402019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.402048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.402467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.402501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.402918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.402947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.403373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.403406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.403833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.403862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 
00:29:33.771 [2024-06-11 09:44:05.404294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.404338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.404560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.404590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.404833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.404863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.405186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.405217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.405625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.405656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.405918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.405949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.406373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.406404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.406823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.406852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.407287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.407329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.407733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.407762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 
00:29:33.771 [2024-06-11 09:44:05.408196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.408227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.408470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.408502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.408913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.408944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.409362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.409400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.409865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.409894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.410187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.410224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.410629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.410659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.411085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.771 [2024-06-11 09:44:05.411115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.771 qpair failed and we were unable to recover it. 00:29:33.771 [2024-06-11 09:44:05.411353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.411383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.411795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.411824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 
00:29:33.772 [2024-06-11 09:44:05.412263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.412293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.412606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.412636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.413068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.413099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.413534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.413565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.413996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.414025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.414451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.414482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.414717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.414749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.415161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.415192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.415631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.415662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.416076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.416106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 
00:29:33.772 [2024-06-11 09:44:05.416528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.416560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.416669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.416699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.417073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.417103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.417562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.417593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.418014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.418046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.418416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.418446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.418868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.418897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.419347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.419380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.419818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.419848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.420090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.420120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 
00:29:33.772 [2024-06-11 09:44:05.420377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.420407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.420849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.420878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.421328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.421359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.421780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.421810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.422237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.422267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.422512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.422543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.422940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.422969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.423436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.423467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.423911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.423940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.772 [2024-06-11 09:44:05.424354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.424386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 
00:29:33.772 [2024-06-11 09:44:05.424819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.772 [2024-06-11 09:44:05.424848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.772 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.425238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.425268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.425544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.425574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.425988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.426022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.426294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.426335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.426757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.426787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.427037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.427066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.427480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.427511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.427943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.427972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.428350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.428382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 
00:29:33.773 [2024-06-11 09:44:05.428806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.428836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.429330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.429361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.429777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.429806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.430219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.430248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.430678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.430708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.431207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.431237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.431678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.431709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.432126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.432155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.432581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.432611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.433026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.433056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 
00:29:33.773 [2024-06-11 09:44:05.433490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.433520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.433958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.433989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.434431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.434462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.434753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.434785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.435170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.435199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.435585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.435616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.436042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.436072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.436183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.436210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.436605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.436636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.437051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.437080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 
00:29:33.773 [2024-06-11 09:44:05.437521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.437552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.437760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.437789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.438209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.438240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.438364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.438393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.438779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.438809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.439179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.439209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.439649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.439681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.440155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.440184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.440598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.773 [2024-06-11 09:44:05.440629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.773 qpair failed and we were unable to recover it. 00:29:33.773 [2024-06-11 09:44:05.441035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.441064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 
00:29:33.774 [2024-06-11 09:44:05.441461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.441493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.441909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.441939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.442363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.442393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.442791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.442826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.442987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.443020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.443462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.443493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.443734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.443764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.444182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.444211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.444611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.444641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.445047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.445077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 
00:29:33.774 [2024-06-11 09:44:05.445523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.445555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.445853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.445883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.446309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.446353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.446781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.446809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.447252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.447281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.447531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.447562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.447993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.448025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.448440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.448472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.448864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.448893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.449328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.449359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 
00:29:33.774 [2024-06-11 09:44:05.449767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.449798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.450302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.450347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.450801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.450830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.451251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.451282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.451588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.451619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.452115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.452145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.452586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.452622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.453053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.453084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.453339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.453370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.453775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.453804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 
00:29:33.774 [2024-06-11 09:44:05.454239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.454270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.454420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.454453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.454727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.454756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.455175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.455205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.455612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.455643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.456059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.456088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.456509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.456540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.774 [2024-06-11 09:44:05.456677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.774 [2024-06-11 09:44:05.456709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.774 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.457123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.457154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.457577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.457608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 
00:29:33.775 [2024-06-11 09:44:05.458039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.458068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.458499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.458531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.458921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.458950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.459378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.459415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.459709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.459739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.460120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.460150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.460578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.460609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.461031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.461060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.461470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.461501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.461914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.461945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 
00:29:33.775 [2024-06-11 09:44:05.462380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.462410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.462821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.462851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.463275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.463305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.463754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.463784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.464169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.464200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.464494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.464526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.464955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.464984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.465279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.465309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.465718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.465748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.466162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.466192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 
00:29:33.775 [2024-06-11 09:44:05.466589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.466619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.467039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.467070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.467327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.467358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.467792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.467823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.468247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.468276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.468577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.468608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.468879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.468910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.469336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.469368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.469816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.469846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 00:29:33.775 [2024-06-11 09:44:05.470276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.775 [2024-06-11 09:44:05.470306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.775 qpair failed and we were unable to recover it. 
00:29:33.781 [2024-06-11 09:44:05.550347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.550379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.550824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.550854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.551296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.551337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.551692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.551723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.551981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.552011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.552356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.552390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.552808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.552841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.553264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.553294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.553722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.553752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.554177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.554207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 
00:29:33.781 [2024-06-11 09:44:05.554613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.554652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.554817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.554849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.555291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.555334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.555693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.555723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.555872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.555900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.556341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.556372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.556757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.556787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.556940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.556973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.557429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.557460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.557891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.557921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 
00:29:33.781 [2024-06-11 09:44:05.558353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.558385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.558878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.558908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.559210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.559241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.559646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.559677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.560114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.781 [2024-06-11 09:44:05.560144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.781 qpair failed and we were unable to recover it. 00:29:33.781 [2024-06-11 09:44:05.560557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.560589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 00:29:33.782 [2024-06-11 09:44:05.561021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.561051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 00:29:33.782 [2024-06-11 09:44:05.561474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.561505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 00:29:33.782 [2024-06-11 09:44:05.561784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.561814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 00:29:33.782 [2024-06-11 09:44:05.562236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.562265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 
00:29:33.782 [2024-06-11 09:44:05.562439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.562470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 00:29:33.782 [2024-06-11 09:44:05.562797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.562827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 00:29:33.782 [2024-06-11 09:44:05.563204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.563234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 00:29:33.782 [2024-06-11 09:44:05.563350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.563378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 00:29:33.782 [2024-06-11 09:44:05.563766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.563796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 00:29:33.782 [2024-06-11 09:44:05.564043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.564073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 00:29:33.782 [2024-06-11 09:44:05.564515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.564547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 00:29:33.782 [2024-06-11 09:44:05.564966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.564995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 00:29:33.782 [2024-06-11 09:44:05.565431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.565465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 00:29:33.782 [2024-06-11 09:44:05.565901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.565931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 
00:29:33.782 [2024-06-11 09:44:05.566144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.566174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 00:29:33.782 [2024-06-11 09:44:05.566453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.566486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 00:29:33.782 [2024-06-11 09:44:05.566871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.566900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 00:29:33.782 [2024-06-11 09:44:05.567311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.567355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 00:29:33.782 [2024-06-11 09:44:05.567747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.782 [2024-06-11 09:44:05.567779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:33.782 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.568240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.568275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.568715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.568746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.569167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.569197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.569608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.569640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.570015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.570046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 
00:29:34.051 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:34.051 [2024-06-11 09:44:05.570467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.570499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:29:34.051 [2024-06-11 09:44:05.570790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.570821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:34.051 [2024-06-11 09:44:05.570946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.570975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:34.051 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.051 [2024-06-11 09:44:05.571410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.571441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.571866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.571896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.572340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.572373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.572816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.572847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.573256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.573287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 
00:29:34.051 [2024-06-11 09:44:05.573810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.573841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.574255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.574285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.574723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.574757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.575178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.575208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.575640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.575672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.576092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.576124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.576541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.576572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.577026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.577057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.577330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.577360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.577669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.577702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 
00:29:34.051 [2024-06-11 09:44:05.578167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.578196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.578616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.578647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.579072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.579107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.579352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.579383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.051 [2024-06-11 09:44:05.579828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.051 [2024-06-11 09:44:05.579858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.051 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.580074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.580103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.580511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.580542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.580949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.580980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.581410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.581441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.581873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.581904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 
00:29:34.052 [2024-06-11 09:44:05.582278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.582308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.582736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.582766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.583184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.583216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.583510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.583542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.583988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.584019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.584435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.584470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.584645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.584679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.585076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.585109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.585343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.585374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.585838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.585868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 
00:29:34.052 [2024-06-11 09:44:05.586295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.586344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.586807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.586837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.587075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.587104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.587498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.587529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.587863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.587894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.588309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.588365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.588817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.588847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.589135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.589166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.589607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.589637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.589874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.589903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 
00:29:34.052 [2024-06-11 09:44:05.590226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.590258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.590727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.590758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.591190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.591221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.591446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.591480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.591927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.591957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.592363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.592394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.592652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.592686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.593100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.593130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.593555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.593586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.593831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.593864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 
00:29:34.052 [2024-06-11 09:44:05.594265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.594294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.594735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.594766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.052 [2024-06-11 09:44:05.595190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.052 [2024-06-11 09:44:05.595220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.052 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.595668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.595700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.595896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.595926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.596365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.596395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.596837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.596866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.597286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.597342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.597851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.597882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.598299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.598345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 
00:29:34.053 [2024-06-11 09:44:05.598764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.598794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.599207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.599237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.599638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.599669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.600083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.600116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.600515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.600546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.600959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.600995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.601420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.601451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.601899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.601929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.602374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.602407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.602881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.602914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 
00:29:34.053 [2024-06-11 09:44:05.603219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.603255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.603681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.603714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.604139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.604168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.604579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.604610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.604889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.604919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.605357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.605388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.605568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.605596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.605976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.606007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.606450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.606480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.606899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.606929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 
00:29:34.053 [2024-06-11 09:44:05.607372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.607403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.607828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.607859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.608291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.608333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.608567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.608596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.608924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.608954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.609451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.609482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.609860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.609890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.610253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.610283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.610741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.610773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 00:29:34.053 [2024-06-11 09:44:05.611197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.611227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.053 qpair failed and we were unable to recover it. 
00:29:34.053 [2024-06-11 09:44:05.611480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.053 [2024-06-11 09:44:05.611512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.054 qpair failed and we were unable to recover it. 00:29:34.054 [2024-06-11 09:44:05.611922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.054 [2024-06-11 09:44:05.611951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.054 qpair failed and we were unable to recover it. 00:29:34.054 [2024-06-11 09:44:05.612390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.054 [2024-06-11 09:44:05.612422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.054 qpair failed and we were unable to recover it. 00:29:34.054 [2024-06-11 09:44:05.612857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.054 [2024-06-11 09:44:05.612886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.054 qpair failed and we were unable to recover it. 00:29:34.054 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.054 [2024-06-11 09:44:05.613335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.054 [2024-06-11 09:44:05.613367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.054 qpair failed and we were unable to recover it. 00:29:34.054 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:34.054 [2024-06-11 09:44:05.613537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.054 [2024-06-11 09:44:05.613567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.054 qpair failed and we were unable to recover it. 00:29:34.054 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.054 [2024-06-11 09:44:05.614021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.054 [2024-06-11 09:44:05.614053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.054 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.054 qpair failed and we were unable to recover it. 00:29:34.054 [2024-06-11 09:44:05.614468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.054 [2024-06-11 09:44:05.614500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420 00:29:34.054 qpair failed and we were unable to recover it.
00:29:34.055 Malloc0
00:29:34.055 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:34.055 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:34.055 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:34.055 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
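The bare "Malloc0" line is the RPC's reply, echoing the name of the bdev just created, and nvmf_create_transport must run once before any subsystem or listener can use TCP. Roughly, the same two steps outside the harness would be (assuming a running nvmf_tgt and SPDK's scripts/rpc.py; the harness's extra -o flag is one of its transport options and is not required for a minimal setup):

    # Create a 64 MB malloc bdev with 512-byte blocks, then the TCP transport.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_transport -t tcp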
00:29:34.056 [2024-06-11 09:44:05.643648] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:34.056 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:34.056 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:34.056 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:34.057 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
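The NQN created here is what the initiator names in its Fabrics CONNECT. In SPDK's rpc.py the two short flags map to --allow-any-host (-a, skip host-NQN whitelisting) and --serial-number (-s). A standalone sketch:

    # Create the target-side subsystem; any host NQN may connect.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001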
00:29:34.057 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:34.057 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:34.057 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:34.057 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
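nvmf_subsystem_add_ns is the step that actually exposes storage: it attaches the Malloc0 bdev created earlier as a namespace of cnode1, with the namespace ID auto-assigned unless one is passed explicitly. Sketch:

    # Back the subsystem with the malloc bdev; initiators will see it as a namespace.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0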
00:29:34.058 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:34.058 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:34.058 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:34.058 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
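Only after nvmf_subsystem_add_listener does the target open 10.0.0.2:4420, which is why every connect() up to this point was refused. The trace adds a data-path listener for cnode1 and, on the next lines, a discovery listener on the same address; as a standalone sketch:

    # Open the data-path listener, then advertise through discovery on the same port.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420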
00:29:34.059 [2024-06-11 09:44:05.681860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.059 [2024-06-11 09:44:05.681889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:34.059 qpair failed and we were unable to recover it.
00:29:34.059 [2024-06-11 09:44:05.682138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.059 [2024-06-11 09:44:05.682168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:34.059 qpair failed and we were unable to recover it.
00:29:34.059 [2024-06-11 09:44:05.682464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.059 [2024-06-11 09:44:05.682495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:34.059 qpair failed and we were unable to recover it.
00:29:34.059 [2024-06-11 09:44:05.682920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.059 [2024-06-11 09:44:05.682955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:34.059 qpair failed and we were unable to recover it.
00:29:34.059 [2024-06-11 09:44:05.683382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.059 [2024-06-11 09:44:05.683413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:34.059 qpair failed and we were unable to recover it.
00:29:34.059 [2024-06-11 09:44:05.683840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.059 [2024-06-11 09:44:05.683870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f42a4000b90 with addr=10.0.0.2, port=4420
00:29:34.059 qpair failed and we were unable to recover it.
00:29:34.059 [2024-06-11 09:44:05.684042] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:34.059 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:34.059 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:34.059 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:34.059 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:34.059 [2024-06-11 09:44:05.694769] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.059 [2024-06-11 09:44:05.694958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.059 [2024-06-11 09:44:05.695016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.059 [2024-06-11 09:44:05.695039] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.059 [2024-06-11 09:44:05.695059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.059 [2024-06-11 09:44:05.695113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.059 qpair failed and we were unable to recover it.
00:29:34.059 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:34.059 09:44:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1331936
00:29:34.059 [2024-06-11 09:44:05.704702] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.059 [2024-06-11 09:44:05.704843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.059 [2024-06-11 09:44:05.704886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.059 [2024-06-11 09:44:05.704904] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.059 [2024-06-11 09:44:05.704920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.059 [2024-06-11 09:44:05.704958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.059 qpair failed and we were unable to recover it.
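Once the *** NVMe/TCP Target Listening *** notice appears, the raw TCP connect succeeds, but the Fabrics CONNECT for the I/O queue pair is rejected instead: the target logs Unknown controller ID 0x1 (the controller it had associated with that ID was lost in the bounce), and the host reports sct 1, sc 130. sct 1 is the command-specific status type, and sc 130 is 0x82, which for a Fabrics CONNECT is the Invalid Parameters status per the NVMe-oF spec; SPDK then folds this into CQ transport error -6, i.e. ENXIO. A quick shell sketch of that decode:

    printf 'sc %d = 0x%x\n' 130 130                # -> sc 130 = 0x82 (CONNECT Invalid Parameters when sct is 1)
    python3 -c 'import os; print(os.strerror(6))'  # -> No such device or address, the ENXIO behind "transport error -6"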
00:29:34.059 [2024-06-11 09:44:05.714742] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.059 [2024-06-11 09:44:05.714856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.059 [2024-06-11 09:44:05.714889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.059 [2024-06-11 09:44:05.714900] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.059 [2024-06-11 09:44:05.714916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.059 [2024-06-11 09:44:05.714943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.059 qpair failed and we were unable to recover it. 00:29:34.059 [2024-06-11 09:44:05.724635] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.059 [2024-06-11 09:44:05.724736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.059 [2024-06-11 09:44:05.724764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.059 [2024-06-11 09:44:05.724774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.059 [2024-06-11 09:44:05.724782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.059 [2024-06-11 09:44:05.724803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.059 qpair failed and we were unable to recover it. 00:29:34.059 [2024-06-11 09:44:05.734669] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.059 [2024-06-11 09:44:05.734774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.059 [2024-06-11 09:44:05.734801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.059 [2024-06-11 09:44:05.734809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.059 [2024-06-11 09:44:05.734817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.059 [2024-06-11 09:44:05.734837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.059 qpair failed and we were unable to recover it. 
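From here to the end of the excerpt, every block is this same rejected I/O-qpair CONNECT, retried by the host roughly every 10 ms (compare the ctrlr.c timestamps: .694769, .704702, .714742, ...). When triaging a run like this it is usually enough to tally the distinct error sites rather than read each repetition; a minimal sketch, assuming the console output has been saved to a file named console.log (hypothetical name):

    # count how often each SPDK source location (file.c: line:function) reports an error
    grep -o '[a-z_]*\.c: *[0-9]*:[a-z_]*' console.log | sort | uniq -c | sort -rn | head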
00:29:34.059 [2024-06-11 09:44:05.744707] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.059 [2024-06-11 09:44:05.744789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.059 [2024-06-11 09:44:05.744814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.059 [2024-06-11 09:44:05.744823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.059 [2024-06-11 09:44:05.744831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.059 [2024-06-11 09:44:05.744850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.059 qpair failed and we were unable to recover it. 00:29:34.059 [2024-06-11 09:44:05.754760] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.059 [2024-06-11 09:44:05.754850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.059 [2024-06-11 09:44:05.754877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.059 [2024-06-11 09:44:05.754886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.059 [2024-06-11 09:44:05.754893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.059 [2024-06-11 09:44:05.754914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.059 qpair failed and we were unable to recover it. 00:29:34.059 [2024-06-11 09:44:05.764747] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.059 [2024-06-11 09:44:05.764829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.059 [2024-06-11 09:44:05.764860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.059 [2024-06-11 09:44:05.764870] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.059 [2024-06-11 09:44:05.764876] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.059 [2024-06-11 09:44:05.764898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.059 qpair failed and we were unable to recover it. 
00:29:34.059 [2024-06-11 09:44:05.774776] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.059 [2024-06-11 09:44:05.774880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.059 [2024-06-11 09:44:05.774905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.059 [2024-06-11 09:44:05.774913] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.059 [2024-06-11 09:44:05.774920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.059 [2024-06-11 09:44:05.774939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.059 qpair failed and we were unable to recover it. 00:29:34.059 [2024-06-11 09:44:05.784786] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.060 [2024-06-11 09:44:05.784870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.060 [2024-06-11 09:44:05.784901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.060 [2024-06-11 09:44:05.784909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.060 [2024-06-11 09:44:05.784916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.060 [2024-06-11 09:44:05.784935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.060 qpair failed and we were unable to recover it. 00:29:34.060 [2024-06-11 09:44:05.794801] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.060 [2024-06-11 09:44:05.794874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.060 [2024-06-11 09:44:05.794900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.060 [2024-06-11 09:44:05.794908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.060 [2024-06-11 09:44:05.794915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.060 [2024-06-11 09:44:05.794934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.060 qpair failed and we were unable to recover it. 
00:29:34.060 [2024-06-11 09:44:05.804870] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.060 [2024-06-11 09:44:05.804960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.060 [2024-06-11 09:44:05.804999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.060 [2024-06-11 09:44:05.805017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.060 [2024-06-11 09:44:05.805025] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.060 [2024-06-11 09:44:05.805050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.060 qpair failed and we were unable to recover it. 00:29:34.060 [2024-06-11 09:44:05.814935] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.060 [2024-06-11 09:44:05.815032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.060 [2024-06-11 09:44:05.815072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.060 [2024-06-11 09:44:05.815082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.060 [2024-06-11 09:44:05.815089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.060 [2024-06-11 09:44:05.815114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.060 qpair failed and we were unable to recover it. 00:29:34.060 [2024-06-11 09:44:05.825030] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.060 [2024-06-11 09:44:05.825125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.060 [2024-06-11 09:44:05.825164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.060 [2024-06-11 09:44:05.825175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.060 [2024-06-11 09:44:05.825182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.060 [2024-06-11 09:44:05.825207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.060 qpair failed and we were unable to recover it. 
00:29:34.060 [2024-06-11 09:44:05.834968] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.060 [2024-06-11 09:44:05.835057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.060 [2024-06-11 09:44:05.835084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.060 [2024-06-11 09:44:05.835093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.060 [2024-06-11 09:44:05.835100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.060 [2024-06-11 09:44:05.835120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.060 qpair failed and we were unable to recover it. 00:29:34.060 [2024-06-11 09:44:05.844904] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.060 [2024-06-11 09:44:05.844984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.060 [2024-06-11 09:44:05.845012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.060 [2024-06-11 09:44:05.845022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.060 [2024-06-11 09:44:05.845028] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.060 [2024-06-11 09:44:05.845050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.060 qpair failed and we were unable to recover it. 00:29:34.060 [2024-06-11 09:44:05.854935] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.060 [2024-06-11 09:44:05.855027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.060 [2024-06-11 09:44:05.855056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.060 [2024-06-11 09:44:05.855066] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.060 [2024-06-11 09:44:05.855073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.060 [2024-06-11 09:44:05.855094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.060 qpair failed and we were unable to recover it. 
00:29:34.346 [2024-06-11 09:44:05.865067] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.346 [2024-06-11 09:44:05.865152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.346 [2024-06-11 09:44:05.865178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.346 [2024-06-11 09:44:05.865188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.346 [2024-06-11 09:44:05.865195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.346 [2024-06-11 09:44:05.865214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.346 qpair failed and we were unable to recover it. 00:29:34.346 [2024-06-11 09:44:05.875134] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.346 [2024-06-11 09:44:05.875224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.346 [2024-06-11 09:44:05.875248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.346 [2024-06-11 09:44:05.875257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.346 [2024-06-11 09:44:05.875265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.346 [2024-06-11 09:44:05.875284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.346 qpair failed and we were unable to recover it. 00:29:34.346 [2024-06-11 09:44:05.885092] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.346 [2024-06-11 09:44:05.885183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.346 [2024-06-11 09:44:05.885209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.346 [2024-06-11 09:44:05.885217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.346 [2024-06-11 09:44:05.885224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.346 [2024-06-11 09:44:05.885244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.346 qpair failed and we were unable to recover it. 
00:29:34.346 [2024-06-11 09:44:05.895147] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.346 [2024-06-11 09:44:05.895241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.346 [2024-06-11 09:44:05.895266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.346 [2024-06-11 09:44:05.895282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.346 [2024-06-11 09:44:05.895289] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.346 [2024-06-11 09:44:05.895308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.346 qpair failed and we were unable to recover it. 00:29:34.346 [2024-06-11 09:44:05.905063] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.346 [2024-06-11 09:44:05.905153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.346 [2024-06-11 09:44:05.905179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.346 [2024-06-11 09:44:05.905187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.346 [2024-06-11 09:44:05.905194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.346 [2024-06-11 09:44:05.905213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.346 qpair failed and we were unable to recover it. 00:29:34.346 [2024-06-11 09:44:05.915091] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.346 [2024-06-11 09:44:05.915169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.346 [2024-06-11 09:44:05.915194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.346 [2024-06-11 09:44:05.915204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.346 [2024-06-11 09:44:05.915211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.346 [2024-06-11 09:44:05.915231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.346 qpair failed and we were unable to recover it. 
00:29:34.346 [2024-06-11 09:44:05.925202] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.346 [2024-06-11 09:44:05.925282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.346 [2024-06-11 09:44:05.925307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.346 [2024-06-11 09:44:05.925328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.346 [2024-06-11 09:44:05.925336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.346 [2024-06-11 09:44:05.925357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.346 qpair failed and we were unable to recover it. 00:29:34.346 [2024-06-11 09:44:05.935305] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.346 [2024-06-11 09:44:05.935399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.346 [2024-06-11 09:44:05.935424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.346 [2024-06-11 09:44:05.935433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.346 [2024-06-11 09:44:05.935440] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.346 [2024-06-11 09:44:05.935459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.346 qpair failed and we were unable to recover it. 00:29:34.346 [2024-06-11 09:44:05.945283] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.346 [2024-06-11 09:44:05.945375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.346 [2024-06-11 09:44:05.945401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.346 [2024-06-11 09:44:05.945410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.346 [2024-06-11 09:44:05.945417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.346 [2024-06-11 09:44:05.945436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.346 qpair failed and we were unable to recover it. 
00:29:34.346 [2024-06-11 09:44:05.955343] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.346 [2024-06-11 09:44:05.955483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.346 [2024-06-11 09:44:05.955510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.346 [2024-06-11 09:44:05.955520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.346 [2024-06-11 09:44:05.955527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.346 [2024-06-11 09:44:05.955547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.346 qpair failed and we were unable to recover it. 00:29:34.346 [2024-06-11 09:44:05.965432] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.346 [2024-06-11 09:44:05.965533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.346 [2024-06-11 09:44:05.965558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.346 [2024-06-11 09:44:05.965567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.346 [2024-06-11 09:44:05.965574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.347 [2024-06-11 09:44:05.965594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.347 qpair failed and we were unable to recover it. 00:29:34.347 [2024-06-11 09:44:05.975475] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.347 [2024-06-11 09:44:05.975594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.347 [2024-06-11 09:44:05.975619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.347 [2024-06-11 09:44:05.975628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.347 [2024-06-11 09:44:05.975635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.347 [2024-06-11 09:44:05.975653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.347 qpair failed and we were unable to recover it. 
00:29:34.347 [2024-06-11 09:44:05.985472] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.347 [2024-06-11 09:44:05.985623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.347 [2024-06-11 09:44:05.985657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.347 [2024-06-11 09:44:05.985665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.347 [2024-06-11 09:44:05.985673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.347 [2024-06-11 09:44:05.985693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.347 qpair failed and we were unable to recover it. 00:29:34.347 [2024-06-11 09:44:05.995517] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.347 [2024-06-11 09:44:05.995605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.347 [2024-06-11 09:44:05.995631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.347 [2024-06-11 09:44:05.995640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.347 [2024-06-11 09:44:05.995647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.347 [2024-06-11 09:44:05.995666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.347 qpair failed and we were unable to recover it. 00:29:34.347 [2024-06-11 09:44:06.005499] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.347 [2024-06-11 09:44:06.005589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.347 [2024-06-11 09:44:06.005615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.347 [2024-06-11 09:44:06.005625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.347 [2024-06-11 09:44:06.005632] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.347 [2024-06-11 09:44:06.005651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.347 qpair failed and we were unable to recover it. 
00:29:34.347 [2024-06-11 09:44:06.015473] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.347 [2024-06-11 09:44:06.015561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.347 [2024-06-11 09:44:06.015586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.347 [2024-06-11 09:44:06.015596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.347 [2024-06-11 09:44:06.015603] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.347 [2024-06-11 09:44:06.015621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.347 qpair failed and we were unable to recover it. 00:29:34.347 [2024-06-11 09:44:06.025598] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.347 [2024-06-11 09:44:06.025687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.347 [2024-06-11 09:44:06.025712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.347 [2024-06-11 09:44:06.025721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.347 [2024-06-11 09:44:06.025728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.347 [2024-06-11 09:44:06.025754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.347 qpair failed and we were unable to recover it. 00:29:34.347 [2024-06-11 09:44:06.035461] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.347 [2024-06-11 09:44:06.035544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.347 [2024-06-11 09:44:06.035572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.347 [2024-06-11 09:44:06.035581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.347 [2024-06-11 09:44:06.035588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.347 [2024-06-11 09:44:06.035608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.347 qpair failed and we were unable to recover it. 
00:29:34.347 [2024-06-11 09:44:06.045622] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.347 [2024-06-11 09:44:06.045702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.347 [2024-06-11 09:44:06.045730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.347 [2024-06-11 09:44:06.045740] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.347 [2024-06-11 09:44:06.045747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.347 [2024-06-11 09:44:06.045767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.347 qpair failed and we were unable to recover it. 00:29:34.347 [2024-06-11 09:44:06.055595] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.347 [2024-06-11 09:44:06.055696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.347 [2024-06-11 09:44:06.055724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.347 [2024-06-11 09:44:06.055733] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.347 [2024-06-11 09:44:06.055740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.347 [2024-06-11 09:44:06.055760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.347 qpair failed and we were unable to recover it. 00:29:34.347 [2024-06-11 09:44:06.065644] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.347 [2024-06-11 09:44:06.065740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.347 [2024-06-11 09:44:06.065766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.347 [2024-06-11 09:44:06.065775] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.347 [2024-06-11 09:44:06.065782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.347 [2024-06-11 09:44:06.065802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.347 qpair failed and we were unable to recover it. 
00:29:34.347 [2024-06-11 09:44:06.075672] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.347 [2024-06-11 09:44:06.075756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.347 [2024-06-11 09:44:06.075789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.347 [2024-06-11 09:44:06.075797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.347 [2024-06-11 09:44:06.075805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.347 [2024-06-11 09:44:06.075824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.347 qpair failed and we were unable to recover it. 00:29:34.347 [2024-06-11 09:44:06.085726] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.347 [2024-06-11 09:44:06.085804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.347 [2024-06-11 09:44:06.085829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.347 [2024-06-11 09:44:06.085839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.347 [2024-06-11 09:44:06.085846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.347 [2024-06-11 09:44:06.085864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.347 qpair failed and we were unable to recover it. 00:29:34.347 [2024-06-11 09:44:06.095839] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.347 [2024-06-11 09:44:06.095950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.347 [2024-06-11 09:44:06.095978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.347 [2024-06-11 09:44:06.095987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.347 [2024-06-11 09:44:06.095994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.347 [2024-06-11 09:44:06.096017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.347 qpair failed and we were unable to recover it. 
00:29:34.347 [2024-06-11 09:44:06.105781] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.348 [2024-06-11 09:44:06.105868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.348 [2024-06-11 09:44:06.105895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.348 [2024-06-11 09:44:06.105905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.348 [2024-06-11 09:44:06.105911] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.348 [2024-06-11 09:44:06.105930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.348 qpair failed and we were unable to recover it. 00:29:34.348 [2024-06-11 09:44:06.115794] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.348 [2024-06-11 09:44:06.115878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.348 [2024-06-11 09:44:06.115908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.348 [2024-06-11 09:44:06.115917] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.348 [2024-06-11 09:44:06.115934] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.348 [2024-06-11 09:44:06.115955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.348 qpair failed and we were unable to recover it. 00:29:34.348 [2024-06-11 09:44:06.125841] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.348 [2024-06-11 09:44:06.125926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.348 [2024-06-11 09:44:06.125951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.348 [2024-06-11 09:44:06.125961] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.348 [2024-06-11 09:44:06.125968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.348 [2024-06-11 09:44:06.125987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.348 qpair failed and we were unable to recover it. 
00:29:34.348 [2024-06-11 09:44:06.135865] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.348 [2024-06-11 09:44:06.135965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.348 [2024-06-11 09:44:06.135996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.348 [2024-06-11 09:44:06.136004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.348 [2024-06-11 09:44:06.136011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.348 [2024-06-11 09:44:06.136032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.348 qpair failed and we were unable to recover it. 00:29:34.348 [2024-06-11 09:44:06.145774] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.348 [2024-06-11 09:44:06.145850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.348 [2024-06-11 09:44:06.145875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.348 [2024-06-11 09:44:06.145883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.348 [2024-06-11 09:44:06.145891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.348 [2024-06-11 09:44:06.145910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.348 qpair failed and we were unable to recover it. 00:29:34.348 [2024-06-11 09:44:06.156020] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.348 [2024-06-11 09:44:06.156148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.348 [2024-06-11 09:44:06.156175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.348 [2024-06-11 09:44:06.156185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.348 [2024-06-11 09:44:06.156192] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.348 [2024-06-11 09:44:06.156211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.348 qpair failed and we were unable to recover it. 
00:29:34.611 [2024-06-11 09:44:06.165967] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.611 [2024-06-11 09:44:06.166065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.611 [2024-06-11 09:44:06.166106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.611 [2024-06-11 09:44:06.166116] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.611 [2024-06-11 09:44:06.166123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.611 [2024-06-11 09:44:06.166148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.611 qpair failed and we were unable to recover it. 00:29:34.611 [2024-06-11 09:44:06.175989] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.611 [2024-06-11 09:44:06.176096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.611 [2024-06-11 09:44:06.176137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.611 [2024-06-11 09:44:06.176148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.611 [2024-06-11 09:44:06.176155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.611 [2024-06-11 09:44:06.176180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.611 qpair failed and we were unable to recover it. 00:29:34.611 [2024-06-11 09:44:06.186033] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.611 [2024-06-11 09:44:06.186117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.611 [2024-06-11 09:44:06.186146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.611 [2024-06-11 09:44:06.186156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.611 [2024-06-11 09:44:06.186163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.611 [2024-06-11 09:44:06.186183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.611 qpair failed and we were unable to recover it. 
00:29:34.611 [2024-06-11 09:44:06.196001] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.611 [2024-06-11 09:44:06.196116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.611 [2024-06-11 09:44:06.196142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.611 [2024-06-11 09:44:06.196151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.611 [2024-06-11 09:44:06.196158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.611 [2024-06-11 09:44:06.196177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.611 qpair failed and we were unable to recover it. 00:29:34.611 [2024-06-11 09:44:06.206102] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.611 [2024-06-11 09:44:06.206189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.611 [2024-06-11 09:44:06.206222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.611 [2024-06-11 09:44:06.206238] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.611 [2024-06-11 09:44:06.206246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.611 [2024-06-11 09:44:06.206267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.611 qpair failed and we were unable to recover it. 00:29:34.611 [2024-06-11 09:44:06.216248] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.611 [2024-06-11 09:44:06.216379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.611 [2024-06-11 09:44:06.216407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.611 [2024-06-11 09:44:06.216415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.611 [2024-06-11 09:44:06.216422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.611 [2024-06-11 09:44:06.216443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.611 qpair failed and we were unable to recover it. 
00:29:34.611 [2024-06-11 09:44:06.226184] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.611 [2024-06-11 09:44:06.226266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.611 [2024-06-11 09:44:06.226292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.611 [2024-06-11 09:44:06.226301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.611 [2024-06-11 09:44:06.226308] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.611 [2024-06-11 09:44:06.226336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.611 qpair failed and we were unable to recover it. 00:29:34.611 [2024-06-11 09:44:06.236087] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.611 [2024-06-11 09:44:06.236174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.611 [2024-06-11 09:44:06.236198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.611 [2024-06-11 09:44:06.236207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.611 [2024-06-11 09:44:06.236213] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.611 [2024-06-11 09:44:06.236232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.611 qpair failed and we were unable to recover it. 00:29:34.611 [2024-06-11 09:44:06.246229] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.611 [2024-06-11 09:44:06.246325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.611 [2024-06-11 09:44:06.246351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.611 [2024-06-11 09:44:06.246360] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.611 [2024-06-11 09:44:06.246367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:34.611 [2024-06-11 09:44:06.246386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:34.611 qpair failed and we were unable to recover it. 
00:29:34.611 [2024-06-11 09:44:06.256259] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.611 [2024-06-11 09:44:06.256356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.611 [2024-06-11 09:44:06.256388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.611 [2024-06-11 09:44:06.256397] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.611 [2024-06-11 09:44:06.256405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.611 [2024-06-11 09:44:06.256425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.611 qpair failed and we were unable to recover it.
00:29:34.611 [2024-06-11 09:44:06.266260] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.611 [2024-06-11 09:44:06.266363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.611 [2024-06-11 09:44:06.266390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.611 [2024-06-11 09:44:06.266398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.611 [2024-06-11 09:44:06.266405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.612 [2024-06-11 09:44:06.266425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.612 qpair failed and we were unable to recover it.
00:29:34.612 [2024-06-11 09:44:06.276426] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.612 [2024-06-11 09:44:06.276516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.612 [2024-06-11 09:44:06.276541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.612 [2024-06-11 09:44:06.276549] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.612 [2024-06-11 09:44:06.276557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.612 [2024-06-11 09:44:06.276576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.612 qpair failed and we were unable to recover it.
00:29:34.612 [2024-06-11 09:44:06.286355] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.612 [2024-06-11 09:44:06.286438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.612 [2024-06-11 09:44:06.286462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.612 [2024-06-11 09:44:06.286471] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.612 [2024-06-11 09:44:06.286478] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.612 [2024-06-11 09:44:06.286498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.612 qpair failed and we were unable to recover it.
00:29:34.612 [2024-06-11 09:44:06.296390] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.612 [2024-06-11 09:44:06.296534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.612 [2024-06-11 09:44:06.296560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.612 [2024-06-11 09:44:06.296574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.612 [2024-06-11 09:44:06.296581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.612 [2024-06-11 09:44:06.296601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.612 qpair failed and we were unable to recover it.
00:29:34.612 [2024-06-11 09:44:06.306407] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.612 [2024-06-11 09:44:06.306498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.612 [2024-06-11 09:44:06.306525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.612 [2024-06-11 09:44:06.306534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.612 [2024-06-11 09:44:06.306541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.612 [2024-06-11 09:44:06.306561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.612 qpair failed and we were unable to recover it.
00:29:34.612 [2024-06-11 09:44:06.316475] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.612 [2024-06-11 09:44:06.316548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.612 [2024-06-11 09:44:06.316573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.612 [2024-06-11 09:44:06.316582] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.612 [2024-06-11 09:44:06.316590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.612 [2024-06-11 09:44:06.316609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.612 qpair failed and we were unable to recover it.
00:29:34.612 [2024-06-11 09:44:06.326505] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.612 [2024-06-11 09:44:06.326588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.612 [2024-06-11 09:44:06.326613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.612 [2024-06-11 09:44:06.326623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.612 [2024-06-11 09:44:06.326629] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.612 [2024-06-11 09:44:06.326649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.612 qpair failed and we were unable to recover it.
00:29:34.612 [2024-06-11 09:44:06.336532] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.612 [2024-06-11 09:44:06.336623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.612 [2024-06-11 09:44:06.336648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.612 [2024-06-11 09:44:06.336657] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.612 [2024-06-11 09:44:06.336664] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.612 [2024-06-11 09:44:06.336683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.612 qpair failed and we were unable to recover it.
00:29:34.612 [2024-06-11 09:44:06.346568] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.612 [2024-06-11 09:44:06.346665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.612 [2024-06-11 09:44:06.346691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.612 [2024-06-11 09:44:06.346699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.612 [2024-06-11 09:44:06.346706] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.612 [2024-06-11 09:44:06.346723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.612 qpair failed and we were unable to recover it.
00:29:34.612 [2024-06-11 09:44:06.356554] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.612 [2024-06-11 09:44:06.356640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.612 [2024-06-11 09:44:06.356666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.612 [2024-06-11 09:44:06.356674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.612 [2024-06-11 09:44:06.356681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.612 [2024-06-11 09:44:06.356702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.612 qpair failed and we were unable to recover it.
00:29:34.612 [2024-06-11 09:44:06.366647] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.612 [2024-06-11 09:44:06.366745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.612 [2024-06-11 09:44:06.366771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.612 [2024-06-11 09:44:06.366780] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.612 [2024-06-11 09:44:06.366787] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.612 [2024-06-11 09:44:06.366805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.612 qpair failed and we were unable to recover it.
00:29:34.612 [2024-06-11 09:44:06.376625] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.612 [2024-06-11 09:44:06.376710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.612 [2024-06-11 09:44:06.376735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.612 [2024-06-11 09:44:06.376743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.612 [2024-06-11 09:44:06.376751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.612 [2024-06-11 09:44:06.376771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.612 qpair failed and we were unable to recover it.
00:29:34.612 [2024-06-11 09:44:06.386730] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.612 [2024-06-11 09:44:06.386822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.612 [2024-06-11 09:44:06.386853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.612 [2024-06-11 09:44:06.386863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.612 [2024-06-11 09:44:06.386870] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.612 [2024-06-11 09:44:06.386890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.612 qpair failed and we were unable to recover it.
00:29:34.612 [2024-06-11 09:44:06.396719] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.612 [2024-06-11 09:44:06.396805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.612 [2024-06-11 09:44:06.396830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.612 [2024-06-11 09:44:06.396839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.613 [2024-06-11 09:44:06.396846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.613 [2024-06-11 09:44:06.396864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.613 qpair failed and we were unable to recover it.
00:29:34.613 [2024-06-11 09:44:06.406803] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.613 [2024-06-11 09:44:06.406890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.613 [2024-06-11 09:44:06.406917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.613 [2024-06-11 09:44:06.406927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.613 [2024-06-11 09:44:06.406933] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.613 [2024-06-11 09:44:06.406952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.613 qpair failed and we were unable to recover it.
00:29:34.613 [2024-06-11 09:44:06.416736] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.613 [2024-06-11 09:44:06.416833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.613 [2024-06-11 09:44:06.416860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.613 [2024-06-11 09:44:06.416868] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.613 [2024-06-11 09:44:06.416875] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.613 [2024-06-11 09:44:06.416895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.613 qpair failed and we were unable to recover it.
00:29:34.876 [2024-06-11 09:44:06.426799] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.876 [2024-06-11 09:44:06.426885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.876 [2024-06-11 09:44:06.426911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.876 [2024-06-11 09:44:06.426921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.876 [2024-06-11 09:44:06.426928] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.876 [2024-06-11 09:44:06.426954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.876 qpair failed and we were unable to recover it.
00:29:34.876 [2024-06-11 09:44:06.436834] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.876 [2024-06-11 09:44:06.436918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.876 [2024-06-11 09:44:06.436943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.876 [2024-06-11 09:44:06.436953] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.876 [2024-06-11 09:44:06.436959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.876 [2024-06-11 09:44:06.436978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.876 qpair failed and we were unable to recover it.
00:29:34.876 [2024-06-11 09:44:06.446892] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.876 [2024-06-11 09:44:06.447001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.876 [2024-06-11 09:44:06.447040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.876 [2024-06-11 09:44:06.447050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.876 [2024-06-11 09:44:06.447057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.876 [2024-06-11 09:44:06.447082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.876 qpair failed and we were unable to recover it.
00:29:34.876 [2024-06-11 09:44:06.456920] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.876 [2024-06-11 09:44:06.457017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.876 [2024-06-11 09:44:06.457057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.876 [2024-06-11 09:44:06.457069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.876 [2024-06-11 09:44:06.457076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.876 [2024-06-11 09:44:06.457101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.876 qpair failed and we were unable to recover it.
00:29:34.876 [2024-06-11 09:44:06.466898] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.876 [2024-06-11 09:44:06.466988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.876 [2024-06-11 09:44:06.467015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.876 [2024-06-11 09:44:06.467024] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.876 [2024-06-11 09:44:06.467031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.876 [2024-06-11 09:44:06.467051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.876 qpair failed and we were unable to recover it.
00:29:34.876 [2024-06-11 09:44:06.476950] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.876 [2024-06-11 09:44:06.477035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.876 [2024-06-11 09:44:06.477081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.876 [2024-06-11 09:44:06.477092] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.876 [2024-06-11 09:44:06.477100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.876 [2024-06-11 09:44:06.477124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.876 qpair failed and we were unable to recover it.
00:29:34.876 [2024-06-11 09:44:06.487001] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.876 [2024-06-11 09:44:06.487083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.876 [2024-06-11 09:44:06.487111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.876 [2024-06-11 09:44:06.487121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.876 [2024-06-11 09:44:06.487128] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.876 [2024-06-11 09:44:06.487149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.876 qpair failed and we were unable to recover it.
00:29:34.876 [2024-06-11 09:44:06.496991] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.876 [2024-06-11 09:44:06.497152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.876 [2024-06-11 09:44:06.497178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.876 [2024-06-11 09:44:06.497187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.876 [2024-06-11 09:44:06.497194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.876 [2024-06-11 09:44:06.497216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.876 qpair failed and we were unable to recover it.
00:29:34.876 [2024-06-11 09:44:06.507027] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.876 [2024-06-11 09:44:06.507183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.876 [2024-06-11 09:44:06.507214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.876 [2024-06-11 09:44:06.507223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.876 [2024-06-11 09:44:06.507232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.876 [2024-06-11 09:44:06.507252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.876 qpair failed and we were unable to recover it.
00:29:34.876 [2024-06-11 09:44:06.517072] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.876 [2024-06-11 09:44:06.517153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.876 [2024-06-11 09:44:06.517178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.876 [2024-06-11 09:44:06.517188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.876 [2024-06-11 09:44:06.517202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.876 [2024-06-11 09:44:06.517222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.876 qpair failed and we were unable to recover it.
00:29:34.876 [2024-06-11 09:44:06.527177] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.876 [2024-06-11 09:44:06.527258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.876 [2024-06-11 09:44:06.527283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.876 [2024-06-11 09:44:06.527293] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.876 [2024-06-11 09:44:06.527299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.876 [2024-06-11 09:44:06.527325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.876 qpair failed and we were unable to recover it.
00:29:34.876 [2024-06-11 09:44:06.537171] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.876 [2024-06-11 09:44:06.537255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.876 [2024-06-11 09:44:06.537280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.876 [2024-06-11 09:44:06.537289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.876 [2024-06-11 09:44:06.537296] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.876 [2024-06-11 09:44:06.537324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.876 qpair failed and we were unable to recover it.
00:29:34.876 [2024-06-11 09:44:06.547158] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.876 [2024-06-11 09:44:06.547250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.876 [2024-06-11 09:44:06.547274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.877 [2024-06-11 09:44:06.547282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.877 [2024-06-11 09:44:06.547289] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.877 [2024-06-11 09:44:06.547308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.877 qpair failed and we were unable to recover it.
00:29:34.877 [2024-06-11 09:44:06.557100] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.877 [2024-06-11 09:44:06.557183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.877 [2024-06-11 09:44:06.557208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.877 [2024-06-11 09:44:06.557218] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.877 [2024-06-11 09:44:06.557225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.877 [2024-06-11 09:44:06.557245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.877 qpair failed and we were unable to recover it.
00:29:34.877 [2024-06-11 09:44:06.567211] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.877 [2024-06-11 09:44:06.567302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.877 [2024-06-11 09:44:06.567338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.877 [2024-06-11 09:44:06.567349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.877 [2024-06-11 09:44:06.567356] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.877 [2024-06-11 09:44:06.567377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.877 qpair failed and we were unable to recover it.
00:29:34.877 [2024-06-11 09:44:06.577256] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.877 [2024-06-11 09:44:06.577348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.877 [2024-06-11 09:44:06.577376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.877 [2024-06-11 09:44:06.577386] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.877 [2024-06-11 09:44:06.577393] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.877 [2024-06-11 09:44:06.577413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.877 qpair failed and we were unable to recover it.
00:29:34.877 [2024-06-11 09:44:06.587288] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.877 [2024-06-11 09:44:06.587375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.877 [2024-06-11 09:44:06.587401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.877 [2024-06-11 09:44:06.587410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.877 [2024-06-11 09:44:06.587417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.877 [2024-06-11 09:44:06.587435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.877 qpair failed and we were unable to recover it.
00:29:34.877 [2024-06-11 09:44:06.597370] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.877 [2024-06-11 09:44:06.597534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.877 [2024-06-11 09:44:06.597561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.877 [2024-06-11 09:44:06.597569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.877 [2024-06-11 09:44:06.597576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.877 [2024-06-11 09:44:06.597596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.877 qpair failed and we were unable to recover it.
00:29:34.877 [2024-06-11 09:44:06.607361] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.877 [2024-06-11 09:44:06.607453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.877 [2024-06-11 09:44:06.607479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.877 [2024-06-11 09:44:06.607488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.877 [2024-06-11 09:44:06.607503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.877 [2024-06-11 09:44:06.607524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.877 qpair failed and we were unable to recover it.
00:29:34.877 [2024-06-11 09:44:06.617392] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.877 [2024-06-11 09:44:06.617481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.877 [2024-06-11 09:44:06.617506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.877 [2024-06-11 09:44:06.617515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.877 [2024-06-11 09:44:06.617523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.877 [2024-06-11 09:44:06.617542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.877 qpair failed and we were unable to recover it.
00:29:34.877 [2024-06-11 09:44:06.627425] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.877 [2024-06-11 09:44:06.627500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.877 [2024-06-11 09:44:06.627526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.877 [2024-06-11 09:44:06.627534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.877 [2024-06-11 09:44:06.627543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.877 [2024-06-11 09:44:06.627562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.877 qpair failed and we were unable to recover it.
00:29:34.877 [2024-06-11 09:44:06.637423] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.877 [2024-06-11 09:44:06.637583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.877 [2024-06-11 09:44:06.637610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.877 [2024-06-11 09:44:06.637619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.877 [2024-06-11 09:44:06.637626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.877 [2024-06-11 09:44:06.637647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.877 qpair failed and we were unable to recover it.
00:29:34.877 [2024-06-11 09:44:06.647533] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.877 [2024-06-11 09:44:06.647632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.877 [2024-06-11 09:44:06.647657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.877 [2024-06-11 09:44:06.647666] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.877 [2024-06-11 09:44:06.647675] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.877 [2024-06-11 09:44:06.647694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.877 qpair failed and we were unable to recover it.
00:29:34.877 [2024-06-11 09:44:06.657518] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.877 [2024-06-11 09:44:06.657613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.877 [2024-06-11 09:44:06.657642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.877 [2024-06-11 09:44:06.657651] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.877 [2024-06-11 09:44:06.657659] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.877 [2024-06-11 09:44:06.657679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.877 qpair failed and we were unable to recover it.
00:29:34.877 [2024-06-11 09:44:06.667522] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.877 [2024-06-11 09:44:06.667653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.877 [2024-06-11 09:44:06.667679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.877 [2024-06-11 09:44:06.667689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.877 [2024-06-11 09:44:06.667696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.877 [2024-06-11 09:44:06.667716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.877 qpair failed and we were unable to recover it.
00:29:34.877 [2024-06-11 09:44:06.677569] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.877 [2024-06-11 09:44:06.677656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.877 [2024-06-11 09:44:06.677681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.877 [2024-06-11 09:44:06.677691] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.877 [2024-06-11 09:44:06.677700] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.878 [2024-06-11 09:44:06.677719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.878 qpair failed and we were unable to recover it.
00:29:34.878 [2024-06-11 09:44:06.687594] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.878 [2024-06-11 09:44:06.687716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.878 [2024-06-11 09:44:06.687744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.878 [2024-06-11 09:44:06.687755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.878 [2024-06-11 09:44:06.687762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:34.878 [2024-06-11 09:44:06.687780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:34.878 qpair failed and we were unable to recover it.
00:29:35.140 [2024-06-11 09:44:06.697664] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.140 [2024-06-11 09:44:06.697806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.140 [2024-06-11 09:44:06.697832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.140 [2024-06-11 09:44:06.697847] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.140 [2024-06-11 09:44:06.697854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:35.140 [2024-06-11 09:44:06.697874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:35.140 qpair failed and we were unable to recover it.
00:29:35.140 [2024-06-11 09:44:06.707672] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.140 [2024-06-11 09:44:06.707761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.140 [2024-06-11 09:44:06.707789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.140 [2024-06-11 09:44:06.707798] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.140 [2024-06-11 09:44:06.707806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:35.140 [2024-06-11 09:44:06.707825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:35.140 qpair failed and we were unable to recover it.
00:29:35.140 [2024-06-11 09:44:06.717675] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.140 [2024-06-11 09:44:06.717772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.140 [2024-06-11 09:44:06.717798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.140 [2024-06-11 09:44:06.717807] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.140 [2024-06-11 09:44:06.717815] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:35.140 [2024-06-11 09:44:06.717833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:35.140 qpair failed and we were unable to recover it.
00:29:35.140 [2024-06-11 09:44:06.727746] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.140 [2024-06-11 09:44:06.727823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.140 [2024-06-11 09:44:06.727849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.140 [2024-06-11 09:44:06.727858] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.140 [2024-06-11 09:44:06.727866] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:35.140 [2024-06-11 09:44:06.727885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:35.140 qpair failed and we were unable to recover it.
00:29:35.140 [2024-06-11 09:44:06.737738] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.140 [2024-06-11 09:44:06.737821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.140 [2024-06-11 09:44:06.737846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.140 [2024-06-11 09:44:06.737854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.140 [2024-06-11 09:44:06.737863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:35.140 [2024-06-11 09:44:06.737882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:35.140 qpair failed and we were unable to recover it.
00:29:35.140 [2024-06-11 09:44:06.747756] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.140 [2024-06-11 09:44:06.747838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.140 [2024-06-11 09:44:06.747866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.140 [2024-06-11 09:44:06.747877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.140 [2024-06-11 09:44:06.747884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:35.140 [2024-06-11 09:44:06.747904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:35.140 qpair failed and we were unable to recover it.
00:29:35.140 [2024-06-11 09:44:06.757831] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.140 [2024-06-11 09:44:06.757919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.140 [2024-06-11 09:44:06.757944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.140 [2024-06-11 09:44:06.757956] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.140 [2024-06-11 09:44:06.757965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:35.140 [2024-06-11 09:44:06.757984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:35.140 qpair failed and we were unable to recover it.
00:29:35.141 [2024-06-11 09:44:06.767767] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.141 [2024-06-11 09:44:06.767852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.141 [2024-06-11 09:44:06.767876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.141 [2024-06-11 09:44:06.767885] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.141 [2024-06-11 09:44:06.767893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:35.141 [2024-06-11 09:44:06.767911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:35.141 qpair failed and we were unable to recover it.
00:29:35.141 [2024-06-11 09:44:06.777890] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.141 [2024-06-11 09:44:06.777979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.141 [2024-06-11 09:44:06.778003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.141 [2024-06-11 09:44:06.778013] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.141 [2024-06-11 09:44:06.778019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:35.141 [2024-06-11 09:44:06.778039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:35.141 qpair failed and we were unable to recover it.
00:29:35.141 [2024-06-11 09:44:06.787932] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.141 [2024-06-11 09:44:06.788030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.141 [2024-06-11 09:44:06.788062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.141 [2024-06-11 09:44:06.788071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.141 [2024-06-11 09:44:06.788078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:35.141 [2024-06-11 09:44:06.788097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:35.141 qpair failed and we were unable to recover it.
00:29:35.141 [2024-06-11 09:44:06.797808] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.141 [2024-06-11 09:44:06.797888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.141 [2024-06-11 09:44:06.797913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.141 [2024-06-11 09:44:06.797922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.141 [2024-06-11 09:44:06.797930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:35.141 [2024-06-11 09:44:06.797949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:35.141 qpair failed and we were unable to recover it.
00:29:35.141 [2024-06-11 09:44:06.807942] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.141 [2024-06-11 09:44:06.808040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.141 [2024-06-11 09:44:06.808068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.141 [2024-06-11 09:44:06.808076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.141 [2024-06-11 09:44:06.808083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:35.141 [2024-06-11 09:44:06.808102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:35.141 qpair failed and we were unable to recover it.
00:29:35.141 [2024-06-11 09:44:06.818001] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.141 [2024-06-11 09:44:06.818107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.141 [2024-06-11 09:44:06.818148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.141 [2024-06-11 09:44:06.818158] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.141 [2024-06-11 09:44:06.818166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:35.141 [2024-06-11 09:44:06.818191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:35.141 qpair failed and we were unable to recover it.
00:29:35.141 [2024-06-11 09:44:06.828092] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.141 [2024-06-11 09:44:06.828184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.141 [2024-06-11 09:44:06.828225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.141 [2024-06-11 09:44:06.828236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.141 [2024-06-11 09:44:06.828244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.141 [2024-06-11 09:44:06.828277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.141 qpair failed and we were unable to recover it. 00:29:35.141 [2024-06-11 09:44:06.838061] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.141 [2024-06-11 09:44:06.838155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.141 [2024-06-11 09:44:06.838182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.141 [2024-06-11 09:44:06.838192] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.141 [2024-06-11 09:44:06.838199] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.141 [2024-06-11 09:44:06.838221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.141 qpair failed and we were unable to recover it. 00:29:35.141 [2024-06-11 09:44:06.848071] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.141 [2024-06-11 09:44:06.848235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.141 [2024-06-11 09:44:06.848265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.141 [2024-06-11 09:44:06.848273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.141 [2024-06-11 09:44:06.848281] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.141 [2024-06-11 09:44:06.848302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.141 qpair failed and we were unable to recover it. 
00:29:35.141 [2024-06-11 09:44:06.858091] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.141 [2024-06-11 09:44:06.858188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.141 [2024-06-11 09:44:06.858216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.141 [2024-06-11 09:44:06.858224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.141 [2024-06-11 09:44:06.858233] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.141 [2024-06-11 09:44:06.858253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.141 qpair failed and we were unable to recover it. 00:29:35.141 [2024-06-11 09:44:06.868125] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.141 [2024-06-11 09:44:06.868214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.141 [2024-06-11 09:44:06.868241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.141 [2024-06-11 09:44:06.868249] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.141 [2024-06-11 09:44:06.868257] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.141 [2024-06-11 09:44:06.868276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.141 qpair failed and we were unable to recover it. 00:29:35.141 [2024-06-11 09:44:06.878088] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.141 [2024-06-11 09:44:06.878203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.141 [2024-06-11 09:44:06.878240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.141 [2024-06-11 09:44:06.878249] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.141 [2024-06-11 09:44:06.878256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.141 [2024-06-11 09:44:06.878275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.141 qpair failed and we were unable to recover it. 
00:29:35.141 [2024-06-11 09:44:06.888216] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.141 [2024-06-11 09:44:06.888300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.141 [2024-06-11 09:44:06.888335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.141 [2024-06-11 09:44:06.888343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.141 [2024-06-11 09:44:06.888350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.141 [2024-06-11 09:44:06.888370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.141 qpair failed and we were unable to recover it. 00:29:35.141 [2024-06-11 09:44:06.898232] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.141 [2024-06-11 09:44:06.898364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.141 [2024-06-11 09:44:06.898390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.141 [2024-06-11 09:44:06.898399] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.141 [2024-06-11 09:44:06.898406] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.141 [2024-06-11 09:44:06.898425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.141 qpair failed and we were unable to recover it. 00:29:35.141 [2024-06-11 09:44:06.908230] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.141 [2024-06-11 09:44:06.908312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.141 [2024-06-11 09:44:06.908348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.141 [2024-06-11 09:44:06.908356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.141 [2024-06-11 09:44:06.908363] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.141 [2024-06-11 09:44:06.908384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.141 qpair failed and we were unable to recover it. 
00:29:35.141 [2024-06-11 09:44:06.918284] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.141 [2024-06-11 09:44:06.918374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.141 [2024-06-11 09:44:06.918401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.141 [2024-06-11 09:44:06.918410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.141 [2024-06-11 09:44:06.918424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.141 [2024-06-11 09:44:06.918443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.141 qpair failed and we were unable to recover it. 00:29:35.141 [2024-06-11 09:44:06.928309] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.141 [2024-06-11 09:44:06.928399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.141 [2024-06-11 09:44:06.928425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.142 [2024-06-11 09:44:06.928434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.142 [2024-06-11 09:44:06.928442] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.142 [2024-06-11 09:44:06.928461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.142 qpair failed and we were unable to recover it. 00:29:35.142 [2024-06-11 09:44:06.938347] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.142 [2024-06-11 09:44:06.938461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.142 [2024-06-11 09:44:06.938486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.142 [2024-06-11 09:44:06.938494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.142 [2024-06-11 09:44:06.938501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.142 [2024-06-11 09:44:06.938523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.142 qpair failed and we were unable to recover it. 
00:29:35.142 [2024-06-11 09:44:06.948374] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.142 [2024-06-11 09:44:06.948460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.142 [2024-06-11 09:44:06.948486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.142 [2024-06-11 09:44:06.948494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.142 [2024-06-11 09:44:06.948501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.142 [2024-06-11 09:44:06.948521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.142 qpair failed and we were unable to recover it. 00:29:35.404 [2024-06-11 09:44:06.958393] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.404 [2024-06-11 09:44:06.958474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.404 [2024-06-11 09:44:06.958500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.404 [2024-06-11 09:44:06.958509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.404 [2024-06-11 09:44:06.958516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.404 [2024-06-11 09:44:06.958536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.404 qpair failed and we were unable to recover it. 00:29:35.404 [2024-06-11 09:44:06.968430] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.404 [2024-06-11 09:44:06.968523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.404 [2024-06-11 09:44:06.968548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.404 [2024-06-11 09:44:06.968557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.404 [2024-06-11 09:44:06.968565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.404 [2024-06-11 09:44:06.968584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.404 qpair failed and we were unable to recover it. 
00:29:35.405 [2024-06-11 09:44:06.978463] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.405 [2024-06-11 09:44:06.978562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.405 [2024-06-11 09:44:06.978585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.405 [2024-06-11 09:44:06.978593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.405 [2024-06-11 09:44:06.978600] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.405 [2024-06-11 09:44:06.978619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.405 qpair failed and we were unable to recover it. 00:29:35.405 [2024-06-11 09:44:06.988475] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.405 [2024-06-11 09:44:06.988558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.405 [2024-06-11 09:44:06.988585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.405 [2024-06-11 09:44:06.988594] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.405 [2024-06-11 09:44:06.988601] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.405 [2024-06-11 09:44:06.988620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.405 qpair failed and we were unable to recover it. 00:29:35.405 [2024-06-11 09:44:06.998529] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.405 [2024-06-11 09:44:06.998616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.405 [2024-06-11 09:44:06.998643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.405 [2024-06-11 09:44:06.998652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.405 [2024-06-11 09:44:06.998658] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.405 [2024-06-11 09:44:06.998678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.405 qpair failed and we were unable to recover it. 
00:29:35.405 [2024-06-11 09:44:07.008565] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.405 [2024-06-11 09:44:07.008663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.405 [2024-06-11 09:44:07.008689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.405 [2024-06-11 09:44:07.008698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.405 [2024-06-11 09:44:07.008711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.405 [2024-06-11 09:44:07.008731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.405 qpair failed and we were unable to recover it. 00:29:35.405 [2024-06-11 09:44:07.018602] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.405 [2024-06-11 09:44:07.018691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.405 [2024-06-11 09:44:07.018716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.405 [2024-06-11 09:44:07.018724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.405 [2024-06-11 09:44:07.018731] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.405 [2024-06-11 09:44:07.018751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.405 qpair failed and we were unable to recover it. 00:29:35.405 [2024-06-11 09:44:07.028635] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.405 [2024-06-11 09:44:07.028711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.405 [2024-06-11 09:44:07.028736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.405 [2024-06-11 09:44:07.028745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.405 [2024-06-11 09:44:07.028751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.405 [2024-06-11 09:44:07.028773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.405 qpair failed and we were unable to recover it. 
00:29:35.405 [2024-06-11 09:44:07.038672] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.405 [2024-06-11 09:44:07.038788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.405 [2024-06-11 09:44:07.038813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.405 [2024-06-11 09:44:07.038822] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.405 [2024-06-11 09:44:07.038829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.405 [2024-06-11 09:44:07.038849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.405 qpair failed and we were unable to recover it. 00:29:35.405 [2024-06-11 09:44:07.048703] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.405 [2024-06-11 09:44:07.048785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.405 [2024-06-11 09:44:07.048809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.405 [2024-06-11 09:44:07.048818] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.405 [2024-06-11 09:44:07.048825] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.405 [2024-06-11 09:44:07.048844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.405 qpair failed and we were unable to recover it. 00:29:35.405 [2024-06-11 09:44:07.058704] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.405 [2024-06-11 09:44:07.058870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.405 [2024-06-11 09:44:07.058898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.405 [2024-06-11 09:44:07.058907] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.405 [2024-06-11 09:44:07.058915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.405 [2024-06-11 09:44:07.058935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.405 qpair failed and we were unable to recover it. 
00:29:35.405 [2024-06-11 09:44:07.068774] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.405 [2024-06-11 09:44:07.068876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.405 [2024-06-11 09:44:07.068907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.405 [2024-06-11 09:44:07.068916] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.405 [2024-06-11 09:44:07.068922] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.405 [2024-06-11 09:44:07.068943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.405 qpair failed and we were unable to recover it. 00:29:35.405 [2024-06-11 09:44:07.078768] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.405 [2024-06-11 09:44:07.078929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.405 [2024-06-11 09:44:07.078973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.405 [2024-06-11 09:44:07.078983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.405 [2024-06-11 09:44:07.078990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.405 [2024-06-11 09:44:07.079016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.405 qpair failed and we were unable to recover it. 00:29:35.405 [2024-06-11 09:44:07.088840] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.405 [2024-06-11 09:44:07.088924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.405 [2024-06-11 09:44:07.088952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.405 [2024-06-11 09:44:07.088962] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.405 [2024-06-11 09:44:07.088969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.405 [2024-06-11 09:44:07.088990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.405 qpair failed and we were unable to recover it. 
00:29:35.405 [2024-06-11 09:44:07.098878] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.405 [2024-06-11 09:44:07.098976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.405 [2024-06-11 09:44:07.099002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.405 [2024-06-11 09:44:07.099020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.405 [2024-06-11 09:44:07.099028] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.405 [2024-06-11 09:44:07.099048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.405 qpair failed and we were unable to recover it. 00:29:35.405 [2024-06-11 09:44:07.108764] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.406 [2024-06-11 09:44:07.108847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.406 [2024-06-11 09:44:07.108877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.406 [2024-06-11 09:44:07.108887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.406 [2024-06-11 09:44:07.108894] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.406 [2024-06-11 09:44:07.108915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.406 qpair failed and we were unable to recover it. 00:29:35.406 [2024-06-11 09:44:07.118934] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.406 [2024-06-11 09:44:07.119032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.406 [2024-06-11 09:44:07.119059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.406 [2024-06-11 09:44:07.119067] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.406 [2024-06-11 09:44:07.119074] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.406 [2024-06-11 09:44:07.119095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.406 qpair failed and we were unable to recover it. 
00:29:35.406 [2024-06-11 09:44:07.128964] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.406 [2024-06-11 09:44:07.129053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.406 [2024-06-11 09:44:07.129094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.406 [2024-06-11 09:44:07.129106] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.406 [2024-06-11 09:44:07.129113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.406 [2024-06-11 09:44:07.129138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.406 qpair failed and we were unable to recover it. 00:29:35.406 [2024-06-11 09:44:07.138993] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.406 [2024-06-11 09:44:07.139102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.406 [2024-06-11 09:44:07.139142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.406 [2024-06-11 09:44:07.139153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.406 [2024-06-11 09:44:07.139160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.406 [2024-06-11 09:44:07.139185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.406 qpair failed and we were unable to recover it. 00:29:35.406 [2024-06-11 09:44:07.149033] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.406 [2024-06-11 09:44:07.149115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.406 [2024-06-11 09:44:07.149143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.406 [2024-06-11 09:44:07.149153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.406 [2024-06-11 09:44:07.149160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.406 [2024-06-11 09:44:07.149180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.406 qpair failed and we were unable to recover it. 
00:29:35.406 [2024-06-11 09:44:07.159015] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.406 [2024-06-11 09:44:07.159120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.406 [2024-06-11 09:44:07.159151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.406 [2024-06-11 09:44:07.159161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.406 [2024-06-11 09:44:07.159167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.406 [2024-06-11 09:44:07.159189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.406 qpair failed and we were unable to recover it. 00:29:35.406 [2024-06-11 09:44:07.169066] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.406 [2024-06-11 09:44:07.169150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.406 [2024-06-11 09:44:07.169177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.406 [2024-06-11 09:44:07.169186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.406 [2024-06-11 09:44:07.169193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.406 [2024-06-11 09:44:07.169212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.406 qpair failed and we were unable to recover it. 00:29:35.406 [2024-06-11 09:44:07.179144] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.406 [2024-06-11 09:44:07.179245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.406 [2024-06-11 09:44:07.179271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.406 [2024-06-11 09:44:07.179280] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.406 [2024-06-11 09:44:07.179287] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.406 [2024-06-11 09:44:07.179306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.406 qpair failed and we were unable to recover it. 
00:29:35.406 [2024-06-11 09:44:07.189006] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.406 [2024-06-11 09:44:07.189084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.406 [2024-06-11 09:44:07.189118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.406 [2024-06-11 09:44:07.189127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.406 [2024-06-11 09:44:07.189135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.406 [2024-06-11 09:44:07.189154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.406 qpair failed and we were unable to recover it. 00:29:35.406 [2024-06-11 09:44:07.199173] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.406 [2024-06-11 09:44:07.199258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.406 [2024-06-11 09:44:07.199285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.406 [2024-06-11 09:44:07.199296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.406 [2024-06-11 09:44:07.199303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.406 [2024-06-11 09:44:07.199331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.406 qpair failed and we were unable to recover it. 00:29:35.406 [2024-06-11 09:44:07.209190] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.406 [2024-06-11 09:44:07.209269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.406 [2024-06-11 09:44:07.209296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.406 [2024-06-11 09:44:07.209306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.406 [2024-06-11 09:44:07.209313] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.406 [2024-06-11 09:44:07.209343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.406 qpair failed and we were unable to recover it. 
00:29:35.669 [2024-06-11 09:44:07.219222] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.669 [2024-06-11 09:44:07.219329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.669 [2024-06-11 09:44:07.219355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.669 [2024-06-11 09:44:07.219365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.669 [2024-06-11 09:44:07.219372] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.669 [2024-06-11 09:44:07.219392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.669 qpair failed and we were unable to recover it. 00:29:35.669 [2024-06-11 09:44:07.229120] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.669 [2024-06-11 09:44:07.229202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.669 [2024-06-11 09:44:07.229228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.669 [2024-06-11 09:44:07.229238] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.669 [2024-06-11 09:44:07.229245] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.669 [2024-06-11 09:44:07.229272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.669 qpair failed and we were unable to recover it. 00:29:35.669 [2024-06-11 09:44:07.239268] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.669 [2024-06-11 09:44:07.239353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.669 [2024-06-11 09:44:07.239377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.669 [2024-06-11 09:44:07.239387] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.669 [2024-06-11 09:44:07.239394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.669 [2024-06-11 09:44:07.239413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.669 qpair failed and we were unable to recover it. 
00:29:35.669 [2024-06-11 09:44:07.249454] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.669 [2024-06-11 09:44:07.249561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.669 [2024-06-11 09:44:07.249588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.669 [2024-06-11 09:44:07.249596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.669 [2024-06-11 09:44:07.249603] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.669 [2024-06-11 09:44:07.249623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.669 qpair failed and we were unable to recover it. 00:29:35.669 [2024-06-11 09:44:07.259349] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.669 [2024-06-11 09:44:07.259447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.669 [2024-06-11 09:44:07.259474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.670 [2024-06-11 09:44:07.259483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.670 [2024-06-11 09:44:07.259489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.670 [2024-06-11 09:44:07.259510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.670 qpair failed and we were unable to recover it. 00:29:35.670 [2024-06-11 09:44:07.269406] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.670 [2024-06-11 09:44:07.269497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.670 [2024-06-11 09:44:07.269522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.670 [2024-06-11 09:44:07.269531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.670 [2024-06-11 09:44:07.269539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.670 [2024-06-11 09:44:07.269557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.670 qpair failed and we were unable to recover it. 
00:29:35.670 [2024-06-11 09:44:07.279378] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.670 [2024-06-11 09:44:07.279466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.670 [2024-06-11 09:44:07.279497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.670 [2024-06-11 09:44:07.279505] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.670 [2024-06-11 09:44:07.279513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.670 [2024-06-11 09:44:07.279532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.670 qpair failed and we were unable to recover it. 00:29:35.670 [2024-06-11 09:44:07.289423] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.670 [2024-06-11 09:44:07.289546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.670 [2024-06-11 09:44:07.289572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.670 [2024-06-11 09:44:07.289581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.670 [2024-06-11 09:44:07.289587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.670 [2024-06-11 09:44:07.289606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.670 qpair failed and we were unable to recover it. 00:29:35.670 [2024-06-11 09:44:07.299470] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.670 [2024-06-11 09:44:07.299627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.670 [2024-06-11 09:44:07.299654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.670 [2024-06-11 09:44:07.299663] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.670 [2024-06-11 09:44:07.299670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.670 [2024-06-11 09:44:07.299690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.670 qpair failed and we were unable to recover it. 
00:29:35.670 [2024-06-11 09:44:07.309490] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.670 [2024-06-11 09:44:07.309577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.670 [2024-06-11 09:44:07.309605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.670 [2024-06-11 09:44:07.309614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.670 [2024-06-11 09:44:07.309621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.670 [2024-06-11 09:44:07.309641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.670 qpair failed and we were unable to recover it. 00:29:35.670 [2024-06-11 09:44:07.319514] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.670 [2024-06-11 09:44:07.319603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.670 [2024-06-11 09:44:07.319630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.670 [2024-06-11 09:44:07.319638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.670 [2024-06-11 09:44:07.319646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.670 [2024-06-11 09:44:07.319672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.670 qpair failed and we were unable to recover it. 00:29:35.670 [2024-06-11 09:44:07.329538] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.670 [2024-06-11 09:44:07.329622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.670 [2024-06-11 09:44:07.329647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.670 [2024-06-11 09:44:07.329656] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.670 [2024-06-11 09:44:07.329663] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.670 [2024-06-11 09:44:07.329682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.670 qpair failed and we were unable to recover it. 
00:29:35.670 [2024-06-11 09:44:07.339539] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.670 [2024-06-11 09:44:07.339622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.670 [2024-06-11 09:44:07.339646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.670 [2024-06-11 09:44:07.339654] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.670 [2024-06-11 09:44:07.339662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.670 [2024-06-11 09:44:07.339682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.670 qpair failed and we were unable to recover it. 00:29:35.670 [2024-06-11 09:44:07.349570] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.670 [2024-06-11 09:44:07.349653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.670 [2024-06-11 09:44:07.349677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.670 [2024-06-11 09:44:07.349687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.670 [2024-06-11 09:44:07.349694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.670 [2024-06-11 09:44:07.349713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.670 qpair failed and we were unable to recover it. 00:29:35.670 [2024-06-11 09:44:07.359631] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.670 [2024-06-11 09:44:07.359711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.670 [2024-06-11 09:44:07.359736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.670 [2024-06-11 09:44:07.359746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.670 [2024-06-11 09:44:07.359753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:35.670 [2024-06-11 09:44:07.359773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:35.670 qpair failed and we were unable to recover it. 
[log condensed: the same seven-line I/O qpair CONNECT failure repeats, at roughly 10 ms intervals, for every further attempt from 09:44:07.369 through 09:44:08.021 (console timestamps 00:29:35.670 through 00:29:36.460); only the timestamps differ, and every attempt ends with "qpair failed and we were unable to recover it."]
00:29:36.460 [2024-06-11 09:44:08.031581] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.460 [2024-06-11 09:44:08.031663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.460 [2024-06-11 09:44:08.031681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.460 [2024-06-11 09:44:08.031689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.460 [2024-06-11 09:44:08.031695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.460 [2024-06-11 09:44:08.031711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.460 qpair failed and we were unable to recover it. 00:29:36.460 [2024-06-11 09:44:08.041593] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.460 [2024-06-11 09:44:08.041661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.460 [2024-06-11 09:44:08.041677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.460 [2024-06-11 09:44:08.041685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.460 [2024-06-11 09:44:08.041691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.460 [2024-06-11 09:44:08.041707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.460 qpair failed and we were unable to recover it. 00:29:36.460 [2024-06-11 09:44:08.051627] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.460 [2024-06-11 09:44:08.051695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.460 [2024-06-11 09:44:08.051711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.460 [2024-06-11 09:44:08.051719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.460 [2024-06-11 09:44:08.051725] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.460 [2024-06-11 09:44:08.051740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.460 qpair failed and we were unable to recover it. 
00:29:36.460 [2024-06-11 09:44:08.061637] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.460 [2024-06-11 09:44:08.061749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.460 [2024-06-11 09:44:08.061766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.460 [2024-06-11 09:44:08.061774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.460 [2024-06-11 09:44:08.061780] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.460 [2024-06-11 09:44:08.061794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.460 qpair failed and we were unable to recover it. 00:29:36.460 [2024-06-11 09:44:08.071596] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.460 [2024-06-11 09:44:08.071689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.460 [2024-06-11 09:44:08.071705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.461 [2024-06-11 09:44:08.071712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.461 [2024-06-11 09:44:08.071719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.461 [2024-06-11 09:44:08.071733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.461 qpair failed and we were unable to recover it. 00:29:36.461 [2024-06-11 09:44:08.081692] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.461 [2024-06-11 09:44:08.081806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.461 [2024-06-11 09:44:08.081825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.461 [2024-06-11 09:44:08.081833] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.461 [2024-06-11 09:44:08.081839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.461 [2024-06-11 09:44:08.081853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.461 qpair failed and we were unable to recover it. 
00:29:36.461 [2024-06-11 09:44:08.091735] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.461 [2024-06-11 09:44:08.091829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.461 [2024-06-11 09:44:08.091846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.461 [2024-06-11 09:44:08.091854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.461 [2024-06-11 09:44:08.091860] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.461 [2024-06-11 09:44:08.091875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.461 qpair failed and we were unable to recover it. 00:29:36.461 [2024-06-11 09:44:08.101767] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.461 [2024-06-11 09:44:08.101846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.461 [2024-06-11 09:44:08.101862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.461 [2024-06-11 09:44:08.101870] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.461 [2024-06-11 09:44:08.101877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.461 [2024-06-11 09:44:08.101891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.461 qpair failed and we were unable to recover it. 00:29:36.461 [2024-06-11 09:44:08.111784] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.461 [2024-06-11 09:44:08.111852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.461 [2024-06-11 09:44:08.111869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.461 [2024-06-11 09:44:08.111876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.461 [2024-06-11 09:44:08.111882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.461 [2024-06-11 09:44:08.111897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.461 qpair failed and we were unable to recover it. 
00:29:36.461 [2024-06-11 09:44:08.121820] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.461 [2024-06-11 09:44:08.121906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.461 [2024-06-11 09:44:08.121923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.461 [2024-06-11 09:44:08.121930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.461 [2024-06-11 09:44:08.121938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.461 [2024-06-11 09:44:08.121956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.461 qpair failed and we were unable to recover it. 00:29:36.461 [2024-06-11 09:44:08.131930] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.461 [2024-06-11 09:44:08.131999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.461 [2024-06-11 09:44:08.132015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.461 [2024-06-11 09:44:08.132022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.461 [2024-06-11 09:44:08.132028] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.461 [2024-06-11 09:44:08.132042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.461 qpair failed and we were unable to recover it. 00:29:36.461 [2024-06-11 09:44:08.141854] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.461 [2024-06-11 09:44:08.141932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.461 [2024-06-11 09:44:08.141947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.461 [2024-06-11 09:44:08.141955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.461 [2024-06-11 09:44:08.141962] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.461 [2024-06-11 09:44:08.141976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.461 qpair failed and we were unable to recover it. 
00:29:36.461 [2024-06-11 09:44:08.151845] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.461 [2024-06-11 09:44:08.151913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.461 [2024-06-11 09:44:08.151927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.461 [2024-06-11 09:44:08.151935] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.461 [2024-06-11 09:44:08.151941] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.461 [2024-06-11 09:44:08.151955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.461 qpair failed and we were unable to recover it. 00:29:36.461 [2024-06-11 09:44:08.161925] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.461 [2024-06-11 09:44:08.162000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.461 [2024-06-11 09:44:08.162016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.461 [2024-06-11 09:44:08.162023] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.461 [2024-06-11 09:44:08.162029] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.461 [2024-06-11 09:44:08.162044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.461 qpair failed and we were unable to recover it. 00:29:36.461 [2024-06-11 09:44:08.171938] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.461 [2024-06-11 09:44:08.172012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.461 [2024-06-11 09:44:08.172035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.461 [2024-06-11 09:44:08.172042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.461 [2024-06-11 09:44:08.172049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.461 [2024-06-11 09:44:08.172065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.461 qpair failed and we were unable to recover it. 
00:29:36.461 [2024-06-11 09:44:08.181970] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.461 [2024-06-11 09:44:08.182049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.461 [2024-06-11 09:44:08.182074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.461 [2024-06-11 09:44:08.182083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.461 [2024-06-11 09:44:08.182091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.461 [2024-06-11 09:44:08.182110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.461 qpair failed and we were unable to recover it. 00:29:36.461 [2024-06-11 09:44:08.191961] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.461 [2024-06-11 09:44:08.192047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.461 [2024-06-11 09:44:08.192074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.461 [2024-06-11 09:44:08.192083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.461 [2024-06-11 09:44:08.192089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.461 [2024-06-11 09:44:08.192108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.461 qpair failed and we were unable to recover it. 00:29:36.462 [2024-06-11 09:44:08.202015] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.462 [2024-06-11 09:44:08.202089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.462 [2024-06-11 09:44:08.202114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.462 [2024-06-11 09:44:08.202124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.462 [2024-06-11 09:44:08.202130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.462 [2024-06-11 09:44:08.202149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.462 qpair failed and we were unable to recover it. 
00:29:36.462 [2024-06-11 09:44:08.212059] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.462 [2024-06-11 09:44:08.212140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.462 [2024-06-11 09:44:08.212166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.462 [2024-06-11 09:44:08.212175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.462 [2024-06-11 09:44:08.212186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.462 [2024-06-11 09:44:08.212205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.462 qpair failed and we were unable to recover it. 00:29:36.462 [2024-06-11 09:44:08.222131] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.462 [2024-06-11 09:44:08.222222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.462 [2024-06-11 09:44:08.222240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.462 [2024-06-11 09:44:08.222248] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.462 [2024-06-11 09:44:08.222254] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.462 [2024-06-11 09:44:08.222270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.462 qpair failed and we were unable to recover it. 00:29:36.462 [2024-06-11 09:44:08.232112] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.462 [2024-06-11 09:44:08.232185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.462 [2024-06-11 09:44:08.232205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.462 [2024-06-11 09:44:08.232213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.462 [2024-06-11 09:44:08.232220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.462 [2024-06-11 09:44:08.232235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.462 qpair failed and we were unable to recover it. 
00:29:36.462 [2024-06-11 09:44:08.242137] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.462 [2024-06-11 09:44:08.242208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.462 [2024-06-11 09:44:08.242225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.462 [2024-06-11 09:44:08.242232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.462 [2024-06-11 09:44:08.242238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.462 [2024-06-11 09:44:08.242254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.462 qpair failed and we were unable to recover it. 00:29:36.462 [2024-06-11 09:44:08.252175] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.462 [2024-06-11 09:44:08.252242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.462 [2024-06-11 09:44:08.252257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.462 [2024-06-11 09:44:08.252264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.462 [2024-06-11 09:44:08.252271] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.462 [2024-06-11 09:44:08.252285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.462 qpair failed and we were unable to recover it. 00:29:36.462 [2024-06-11 09:44:08.262087] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.462 [2024-06-11 09:44:08.262208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.462 [2024-06-11 09:44:08.262225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.462 [2024-06-11 09:44:08.262232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.462 [2024-06-11 09:44:08.262239] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.462 [2024-06-11 09:44:08.262253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.462 qpair failed and we were unable to recover it. 
00:29:36.462 [2024-06-11 09:44:08.272228] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.462 [2024-06-11 09:44:08.272297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.462 [2024-06-11 09:44:08.272312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.462 [2024-06-11 09:44:08.272326] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.462 [2024-06-11 09:44:08.272332] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.462 [2024-06-11 09:44:08.272347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.462 qpair failed and we were unable to recover it. 00:29:36.724 [2024-06-11 09:44:08.282245] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.724 [2024-06-11 09:44:08.282319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.724 [2024-06-11 09:44:08.282334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.724 [2024-06-11 09:44:08.282342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.724 [2024-06-11 09:44:08.282349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.724 [2024-06-11 09:44:08.282364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.724 [2024-06-11 09:44:08.292255] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.724 [2024-06-11 09:44:08.292343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.724 [2024-06-11 09:44:08.292359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.724 [2024-06-11 09:44:08.292367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.724 [2024-06-11 09:44:08.292373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.724 [2024-06-11 09:44:08.292388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.724 qpair failed and we were unable to recover it. 
00:29:36.724 [2024-06-11 09:44:08.302308] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.724 [2024-06-11 09:44:08.302396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.724 [2024-06-11 09:44:08.302411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.724 [2024-06-11 09:44:08.302419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.724 [2024-06-11 09:44:08.302433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.724 [2024-06-11 09:44:08.302449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.724 qpair failed and we were unable to recover it. 00:29:36.725 [2024-06-11 09:44:08.312352] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.725 [2024-06-11 09:44:08.312423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.725 [2024-06-11 09:44:08.312439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.725 [2024-06-11 09:44:08.312446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.725 [2024-06-11 09:44:08.312452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.725 [2024-06-11 09:44:08.312467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-06-11 09:44:08.322364] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.725 [2024-06-11 09:44:08.322436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.725 [2024-06-11 09:44:08.322452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.725 [2024-06-11 09:44:08.322459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.725 [2024-06-11 09:44:08.322465] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.725 [2024-06-11 09:44:08.322480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.725 qpair failed and we were unable to recover it. 
00:29:36.725 [2024-06-11 09:44:08.332382] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.725 [2024-06-11 09:44:08.332454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.725 [2024-06-11 09:44:08.332470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.725 [2024-06-11 09:44:08.332477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.725 [2024-06-11 09:44:08.332483] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.725 [2024-06-11 09:44:08.332499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-06-11 09:44:08.342418] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.725 [2024-06-11 09:44:08.342497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.725 [2024-06-11 09:44:08.342513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.725 [2024-06-11 09:44:08.342520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.725 [2024-06-11 09:44:08.342527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.725 [2024-06-11 09:44:08.342542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-06-11 09:44:08.352484] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.725 [2024-06-11 09:44:08.352552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.725 [2024-06-11 09:44:08.352568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.725 [2024-06-11 09:44:08.352575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.725 [2024-06-11 09:44:08.352581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.725 [2024-06-11 09:44:08.352596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.725 qpair failed and we were unable to recover it. 
00:29:36.725 [2024-06-11 09:44:08.362559] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.725 [2024-06-11 09:44:08.362626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.725 [2024-06-11 09:44:08.362642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.725 [2024-06-11 09:44:08.362649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.725 [2024-06-11 09:44:08.362656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.725 [2024-06-11 09:44:08.362671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-06-11 09:44:08.372504] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.725 [2024-06-11 09:44:08.372605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.725 [2024-06-11 09:44:08.372621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.725 [2024-06-11 09:44:08.372628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.725 [2024-06-11 09:44:08.372635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.725 [2024-06-11 09:44:08.372649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-06-11 09:44:08.382526] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.725 [2024-06-11 09:44:08.382607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.725 [2024-06-11 09:44:08.382623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.725 [2024-06-11 09:44:08.382630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.725 [2024-06-11 09:44:08.382637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.725 [2024-06-11 09:44:08.382651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.725 qpair failed and we were unable to recover it. 
00:29:36.725 [2024-06-11 09:44:08.392589] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.725 [2024-06-11 09:44:08.392654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.725 [2024-06-11 09:44:08.392669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.725 [2024-06-11 09:44:08.392680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.725 [2024-06-11 09:44:08.392687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.725 [2024-06-11 09:44:08.392702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-06-11 09:44:08.402588] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.725 [2024-06-11 09:44:08.402689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.725 [2024-06-11 09:44:08.402705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.725 [2024-06-11 09:44:08.402713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.725 [2024-06-11 09:44:08.402719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.725 [2024-06-11 09:44:08.402733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-06-11 09:44:08.412615] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.725 [2024-06-11 09:44:08.412686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.725 [2024-06-11 09:44:08.412701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.725 [2024-06-11 09:44:08.412709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.725 [2024-06-11 09:44:08.412715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.725 [2024-06-11 09:44:08.412730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.725 qpair failed and we were unable to recover it. 
00:29:36.725 [2024-06-11 09:44:08.422600] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.725 [2024-06-11 09:44:08.422675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.725 [2024-06-11 09:44:08.422691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.725 [2024-06-11 09:44:08.422698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.725 [2024-06-11 09:44:08.422705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.725 [2024-06-11 09:44:08.422720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-06-11 09:44:08.432726] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.725 [2024-06-11 09:44:08.432794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.725 [2024-06-11 09:44:08.432809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.725 [2024-06-11 09:44:08.432816] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.725 [2024-06-11 09:44:08.432823] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.725 [2024-06-11 09:44:08.432838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.725 qpair failed and we were unable to recover it. 00:29:36.725 [2024-06-11 09:44:08.442706] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.726 [2024-06-11 09:44:08.442777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.726 [2024-06-11 09:44:08.442793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.726 [2024-06-11 09:44:08.442800] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.726 [2024-06-11 09:44:08.442806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.726 [2024-06-11 09:44:08.442821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.726 qpair failed and we were unable to recover it. 
00:29:36.726 [2024-06-11 09:44:08.452784] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.726 [2024-06-11 09:44:08.452859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.726 [2024-06-11 09:44:08.452875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.726 [2024-06-11 09:44:08.452882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.726 [2024-06-11 09:44:08.452889] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.726 [2024-06-11 09:44:08.452903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-06-11 09:44:08.462661] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.726 [2024-06-11 09:44:08.462737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.726 [2024-06-11 09:44:08.462753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.726 [2024-06-11 09:44:08.462761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.726 [2024-06-11 09:44:08.462767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.726 [2024-06-11 09:44:08.462782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-06-11 09:44:08.472805] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.726 [2024-06-11 09:44:08.472905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.726 [2024-06-11 09:44:08.472921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.726 [2024-06-11 09:44:08.472928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.726 [2024-06-11 09:44:08.472934] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.726 [2024-06-11 09:44:08.472948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.726 qpair failed and we were unable to recover it. 
00:29:36.726 [2024-06-11 09:44:08.482825] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.726 [2024-06-11 09:44:08.482892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.726 [2024-06-11 09:44:08.482911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.726 [2024-06-11 09:44:08.482918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.726 [2024-06-11 09:44:08.482925] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.726 [2024-06-11 09:44:08.482940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-06-11 09:44:08.492841] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.726 [2024-06-11 09:44:08.492974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.726 [2024-06-11 09:44:08.492989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.726 [2024-06-11 09:44:08.492997] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.726 [2024-06-11 09:44:08.493003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.726 [2024-06-11 09:44:08.493017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-06-11 09:44:08.502916] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.726 [2024-06-11 09:44:08.502996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.726 [2024-06-11 09:44:08.503021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.726 [2024-06-11 09:44:08.503031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.726 [2024-06-11 09:44:08.503037] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.726 [2024-06-11 09:44:08.503057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.726 qpair failed and we were unable to recover it. 
00:29:36.726 [2024-06-11 09:44:08.512918] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.726 [2024-06-11 09:44:08.512992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.726 [2024-06-11 09:44:08.513010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.726 [2024-06-11 09:44:08.513018] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.726 [2024-06-11 09:44:08.513025] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.726 [2024-06-11 09:44:08.513041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-06-11 09:44:08.522974] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.726 [2024-06-11 09:44:08.523069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.726 [2024-06-11 09:44:08.523086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.726 [2024-06-11 09:44:08.523093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.726 [2024-06-11 09:44:08.523099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.726 [2024-06-11 09:44:08.523119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.726 qpair failed and we were unable to recover it. 00:29:36.726 [2024-06-11 09:44:08.532963] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.726 [2024-06-11 09:44:08.533064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.726 [2024-06-11 09:44:08.533081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.726 [2024-06-11 09:44:08.533088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.726 [2024-06-11 09:44:08.533094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.726 [2024-06-11 09:44:08.533109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.726 qpair failed and we were unable to recover it. 
00:29:36.988 [2024-06-11 09:44:08.542985] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.988 [2024-06-11 09:44:08.543060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.988 [2024-06-11 09:44:08.543076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.988 [2024-06-11 09:44:08.543083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.988 [2024-06-11 09:44:08.543090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.988 [2024-06-11 09:44:08.543104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.988 qpair failed and we were unable to recover it. 00:29:36.988 [2024-06-11 09:44:08.553034] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.988 [2024-06-11 09:44:08.553104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.988 [2024-06-11 09:44:08.553119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.988 [2024-06-11 09:44:08.553127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.988 [2024-06-11 09:44:08.553133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.988 [2024-06-11 09:44:08.553148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.988 qpair failed and we were unable to recover it. 00:29:36.988 [2024-06-11 09:44:08.563141] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.988 [2024-06-11 09:44:08.563249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.988 [2024-06-11 09:44:08.563275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.988 [2024-06-11 09:44:08.563284] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.988 [2024-06-11 09:44:08.563291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.988 [2024-06-11 09:44:08.563310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.988 qpair failed and we were unable to recover it. 
00:29:36.988 [2024-06-11 09:44:08.573086] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.988 [2024-06-11 09:44:08.573159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.988 [2024-06-11 09:44:08.573181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.988 [2024-06-11 09:44:08.573188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.988 [2024-06-11 09:44:08.573195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.988 [2024-06-11 09:44:08.573211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.988 qpair failed and we were unable to recover it. 00:29:36.988 [2024-06-11 09:44:08.583150] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.988 [2024-06-11 09:44:08.583223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.988 [2024-06-11 09:44:08.583238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.988 [2024-06-11 09:44:08.583246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.988 [2024-06-11 09:44:08.583252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.988 [2024-06-11 09:44:08.583267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.988 qpair failed and we were unable to recover it. 00:29:36.988 [2024-06-11 09:44:08.593179] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.988 [2024-06-11 09:44:08.593253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.988 [2024-06-11 09:44:08.593269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.988 [2024-06-11 09:44:08.593276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.988 [2024-06-11 09:44:08.593282] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.988 [2024-06-11 09:44:08.593298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.988 qpair failed and we were unable to recover it. 
00:29:36.988 [2024-06-11 09:44:08.603215] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.988 [2024-06-11 09:44:08.603280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.988 [2024-06-11 09:44:08.603295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.988 [2024-06-11 09:44:08.603302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.988 [2024-06-11 09:44:08.603308] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.988 [2024-06-11 09:44:08.603327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.988 qpair failed and we were unable to recover it. 00:29:36.988 [2024-06-11 09:44:08.613201] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.988 [2024-06-11 09:44:08.613274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.988 [2024-06-11 09:44:08.613292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.988 [2024-06-11 09:44:08.613300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.988 [2024-06-11 09:44:08.613311] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.988 [2024-06-11 09:44:08.613336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.988 qpair failed and we were unable to recover it. 00:29:36.988 [2024-06-11 09:44:08.623219] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.988 [2024-06-11 09:44:08.623293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.988 [2024-06-11 09:44:08.623309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.988 [2024-06-11 09:44:08.623321] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.988 [2024-06-11 09:44:08.623328] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.988 [2024-06-11 09:44:08.623343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.988 qpair failed and we were unable to recover it. 
00:29:36.988 [2024-06-11 09:44:08.633274] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.988 [2024-06-11 09:44:08.633384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.988 [2024-06-11 09:44:08.633400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.989 [2024-06-11 09:44:08.633407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.989 [2024-06-11 09:44:08.633413] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.989 [2024-06-11 09:44:08.633428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.989 qpair failed and we were unable to recover it. 00:29:36.989 [2024-06-11 09:44:08.643165] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.989 [2024-06-11 09:44:08.643235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.989 [2024-06-11 09:44:08.643251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.989 [2024-06-11 09:44:08.643258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.989 [2024-06-11 09:44:08.643264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.989 [2024-06-11 09:44:08.643279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.989 qpair failed and we were unable to recover it. 00:29:36.989 [2024-06-11 09:44:08.653219] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.989 [2024-06-11 09:44:08.653291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.989 [2024-06-11 09:44:08.653307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.989 [2024-06-11 09:44:08.653319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.989 [2024-06-11 09:44:08.653326] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.989 [2024-06-11 09:44:08.653341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.989 qpair failed and we were unable to recover it. 
00:29:36.989 [2024-06-11 09:44:08.663348] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.989 [2024-06-11 09:44:08.663425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.989 [2024-06-11 09:44:08.663442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.989 [2024-06-11 09:44:08.663449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.989 [2024-06-11 09:44:08.663456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.989 [2024-06-11 09:44:08.663470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.989 qpair failed and we were unable to recover it. 00:29:36.989 [2024-06-11 09:44:08.673354] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.989 [2024-06-11 09:44:08.673429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.989 [2024-06-11 09:44:08.673444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.989 [2024-06-11 09:44:08.673451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.989 [2024-06-11 09:44:08.673457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.989 [2024-06-11 09:44:08.673472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.989 qpair failed and we were unable to recover it. 00:29:36.989 [2024-06-11 09:44:08.683407] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.989 [2024-06-11 09:44:08.683476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.989 [2024-06-11 09:44:08.683492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.989 [2024-06-11 09:44:08.683499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.989 [2024-06-11 09:44:08.683505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.989 [2024-06-11 09:44:08.683519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.989 qpair failed and we were unable to recover it. 
00:29:36.989 [2024-06-11 09:44:08.693416] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.989 [2024-06-11 09:44:08.693492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.989 [2024-06-11 09:44:08.693508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.989 [2024-06-11 09:44:08.693515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.989 [2024-06-11 09:44:08.693521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.989 [2024-06-11 09:44:08.693536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.989 qpair failed and we were unable to recover it. 00:29:36.989 [2024-06-11 09:44:08.703422] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.989 [2024-06-11 09:44:08.703495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.989 [2024-06-11 09:44:08.703510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.989 [2024-06-11 09:44:08.703517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.989 [2024-06-11 09:44:08.703527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.989 [2024-06-11 09:44:08.703542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.989 qpair failed and we were unable to recover it. 00:29:36.989 [2024-06-11 09:44:08.713458] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.989 [2024-06-11 09:44:08.713527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.989 [2024-06-11 09:44:08.713543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.989 [2024-06-11 09:44:08.713551] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.989 [2024-06-11 09:44:08.713557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.989 [2024-06-11 09:44:08.713572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.989 qpair failed and we were unable to recover it. 
00:29:36.989 [2024-06-11 09:44:08.723507] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.989 [2024-06-11 09:44:08.723577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.989 [2024-06-11 09:44:08.723592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.989 [2024-06-11 09:44:08.723599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.989 [2024-06-11 09:44:08.723605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.989 [2024-06-11 09:44:08.723619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.989 qpair failed and we were unable to recover it. 00:29:36.989 [2024-06-11 09:44:08.733531] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.989 [2024-06-11 09:44:08.733603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.989 [2024-06-11 09:44:08.733618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.989 [2024-06-11 09:44:08.733626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.989 [2024-06-11 09:44:08.733633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.989 [2024-06-11 09:44:08.733648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.989 qpair failed and we were unable to recover it. 00:29:36.989 [2024-06-11 09:44:08.743561] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.989 [2024-06-11 09:44:08.743636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.989 [2024-06-11 09:44:08.743651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.989 [2024-06-11 09:44:08.743659] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.989 [2024-06-11 09:44:08.743665] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.989 [2024-06-11 09:44:08.743680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.989 qpair failed and we were unable to recover it. 
00:29:36.989 [2024-06-11 09:44:08.753580] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.989 [2024-06-11 09:44:08.753649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.989 [2024-06-11 09:44:08.753664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.989 [2024-06-11 09:44:08.753672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.989 [2024-06-11 09:44:08.753678] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.989 [2024-06-11 09:44:08.753694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.989 qpair failed and we were unable to recover it. 00:29:36.989 [2024-06-11 09:44:08.763615] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.989 [2024-06-11 09:44:08.763688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.989 [2024-06-11 09:44:08.763704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.989 [2024-06-11 09:44:08.763712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.990 [2024-06-11 09:44:08.763718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.990 [2024-06-11 09:44:08.763733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.990 qpair failed and we were unable to recover it. 00:29:36.990 [2024-06-11 09:44:08.773538] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.990 [2024-06-11 09:44:08.773671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.990 [2024-06-11 09:44:08.773688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.990 [2024-06-11 09:44:08.773696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.990 [2024-06-11 09:44:08.773702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.990 [2024-06-11 09:44:08.773718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.990 qpair failed and we were unable to recover it. 
00:29:36.990 [2024-06-11 09:44:08.783657] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.990 [2024-06-11 09:44:08.783733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.990 [2024-06-11 09:44:08.783749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.990 [2024-06-11 09:44:08.783756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.990 [2024-06-11 09:44:08.783762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.990 [2024-06-11 09:44:08.783777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.990 qpair failed and we were unable to recover it. 00:29:36.990 [2024-06-11 09:44:08.793704] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.990 [2024-06-11 09:44:08.793777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.990 [2024-06-11 09:44:08.793792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.990 [2024-06-11 09:44:08.793802] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.990 [2024-06-11 09:44:08.793809] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:36.990 [2024-06-11 09:44:08.793824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:36.990 qpair failed and we were unable to recover it. 00:29:37.252 [2024-06-11 09:44:08.803672] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.252 [2024-06-11 09:44:08.803744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.252 [2024-06-11 09:44:08.803760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.252 [2024-06-11 09:44:08.803767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.252 [2024-06-11 09:44:08.803774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.252 [2024-06-11 09:44:08.803789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.252 qpair failed and we were unable to recover it. 
00:29:37.252 [2024-06-11 09:44:08.813749] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.252 [2024-06-11 09:44:08.813821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.252 [2024-06-11 09:44:08.813837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.252 [2024-06-11 09:44:08.813844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.252 [2024-06-11 09:44:08.813850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.252 [2024-06-11 09:44:08.813866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.252 qpair failed and we were unable to recover it. 00:29:37.252 [2024-06-11 09:44:08.823791] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.252 [2024-06-11 09:44:08.823868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.252 [2024-06-11 09:44:08.823883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.252 [2024-06-11 09:44:08.823890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.252 [2024-06-11 09:44:08.823896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.252 [2024-06-11 09:44:08.823911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.252 qpair failed and we were unable to recover it. 00:29:37.252 [2024-06-11 09:44:08.833805] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.252 [2024-06-11 09:44:08.833878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.252 [2024-06-11 09:44:08.833894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.252 [2024-06-11 09:44:08.833902] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.252 [2024-06-11 09:44:08.833908] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.252 [2024-06-11 09:44:08.833923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.252 qpair failed and we were unable to recover it. 
00:29:37.252 [2024-06-11 09:44:08.843828] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.252 [2024-06-11 09:44:08.843900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.252 [2024-06-11 09:44:08.843917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.252 [2024-06-11 09:44:08.843925] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.252 [2024-06-11 09:44:08.843932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.252 [2024-06-11 09:44:08.843947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.252 qpair failed and we were unable to recover it. 00:29:37.252 [2024-06-11 09:44:08.853918] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.252 [2024-06-11 09:44:08.854027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.252 [2024-06-11 09:44:08.854043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.252 [2024-06-11 09:44:08.854050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.252 [2024-06-11 09:44:08.854057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.252 [2024-06-11 09:44:08.854072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.252 qpair failed and we were unable to recover it. 00:29:37.252 [2024-06-11 09:44:08.863877] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.252 [2024-06-11 09:44:08.863952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.252 [2024-06-11 09:44:08.863968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.252 [2024-06-11 09:44:08.863975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.252 [2024-06-11 09:44:08.863982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.252 [2024-06-11 09:44:08.863997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.252 qpair failed and we were unable to recover it. 
00:29:37.252 [2024-06-11 09:44:08.873945] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.252 [2024-06-11 09:44:08.874067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.252 [2024-06-11 09:44:08.874082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.252 [2024-06-11 09:44:08.874089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.252 [2024-06-11 09:44:08.874095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.252 [2024-06-11 09:44:08.874110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.252 qpair failed and we were unable to recover it. 00:29:37.252 [2024-06-11 09:44:08.883933] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.252 [2024-06-11 09:44:08.883999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.252 [2024-06-11 09:44:08.884018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.252 [2024-06-11 09:44:08.884025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.252 [2024-06-11 09:44:08.884031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.253 [2024-06-11 09:44:08.884046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.253 qpair failed and we were unable to recover it. 00:29:37.253 [2024-06-11 09:44:08.893961] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.253 [2024-06-11 09:44:08.894030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.253 [2024-06-11 09:44:08.894045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.253 [2024-06-11 09:44:08.894053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.253 [2024-06-11 09:44:08.894059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.253 [2024-06-11 09:44:08.894073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.253 qpair failed and we were unable to recover it. 
00:29:37.253 [2024-06-11 09:44:08.903992] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.253 [2024-06-11 09:44:08.904072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.253 [2024-06-11 09:44:08.904087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.253 [2024-06-11 09:44:08.904094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.253 [2024-06-11 09:44:08.904102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.253 [2024-06-11 09:44:08.904116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.253 qpair failed and we were unable to recover it. 00:29:37.253 [2024-06-11 09:44:08.914013] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.253 [2024-06-11 09:44:08.914090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.253 [2024-06-11 09:44:08.914108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.253 [2024-06-11 09:44:08.914115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.253 [2024-06-11 09:44:08.914123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.253 [2024-06-11 09:44:08.914137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.253 qpair failed and we were unable to recover it. 00:29:37.253 [2024-06-11 09:44:08.924113] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.253 [2024-06-11 09:44:08.924214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.253 [2024-06-11 09:44:08.924231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.253 [2024-06-11 09:44:08.924238] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.253 [2024-06-11 09:44:08.924244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.253 [2024-06-11 09:44:08.924262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.253 qpair failed and we were unable to recover it. 
00:29:37.253 [2024-06-11 09:44:08.934090] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.253 [2024-06-11 09:44:08.934192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.253 [2024-06-11 09:44:08.934208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.253 [2024-06-11 09:44:08.934216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.253 [2024-06-11 09:44:08.934222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.253 [2024-06-11 09:44:08.934237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.253 qpair failed and we were unable to recover it. 00:29:37.253 [2024-06-11 09:44:08.944160] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.253 [2024-06-11 09:44:08.944230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.253 [2024-06-11 09:44:08.944246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.253 [2024-06-11 09:44:08.944253] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.253 [2024-06-11 09:44:08.944260] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.253 [2024-06-11 09:44:08.944274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.253 qpair failed and we were unable to recover it. 00:29:37.253 [2024-06-11 09:44:08.954118] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.253 [2024-06-11 09:44:08.954194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.253 [2024-06-11 09:44:08.954210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.253 [2024-06-11 09:44:08.954217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.253 [2024-06-11 09:44:08.954224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.253 [2024-06-11 09:44:08.954238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.253 qpair failed and we were unable to recover it. 
00:29:37.253 [2024-06-11 09:44:08.964180] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.253 [2024-06-11 09:44:08.964322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.253 [2024-06-11 09:44:08.964339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.253 [2024-06-11 09:44:08.964346] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.253 [2024-06-11 09:44:08.964353] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.253 [2024-06-11 09:44:08.964368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.253 qpair failed and we were unable to recover it. 00:29:37.253 [2024-06-11 09:44:08.974184] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.253 [2024-06-11 09:44:08.974256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.253 [2024-06-11 09:44:08.974275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.253 [2024-06-11 09:44:08.974282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.253 [2024-06-11 09:44:08.974288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.253 [2024-06-11 09:44:08.974303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.253 qpair failed and we were unable to recover it. 00:29:37.253 [2024-06-11 09:44:08.984236] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.253 [2024-06-11 09:44:08.984358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.253 [2024-06-11 09:44:08.984374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.253 [2024-06-11 09:44:08.984381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.253 [2024-06-11 09:44:08.984387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.253 [2024-06-11 09:44:08.984402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.253 qpair failed and we were unable to recover it. 
00:29:37.253 [2024-06-11 09:44:08.994237] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.253 [2024-06-11 09:44:08.994310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.253 [2024-06-11 09:44:08.994331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.253 [2024-06-11 09:44:08.994338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.253 [2024-06-11 09:44:08.994344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.253 [2024-06-11 09:44:08.994359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.253 qpair failed and we were unable to recover it. 00:29:37.253 [2024-06-11 09:44:09.004276] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.253 [2024-06-11 09:44:09.004354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.253 [2024-06-11 09:44:09.004370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.253 [2024-06-11 09:44:09.004378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.253 [2024-06-11 09:44:09.004385] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.253 [2024-06-11 09:44:09.004400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.253 qpair failed and we were unable to recover it. 00:29:37.253 [2024-06-11 09:44:09.014291] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.253 [2024-06-11 09:44:09.014367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.253 [2024-06-11 09:44:09.014383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.253 [2024-06-11 09:44:09.014390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.253 [2024-06-11 09:44:09.014397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.254 [2024-06-11 09:44:09.014419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.254 qpair failed and we were unable to recover it. 
00:29:37.254 [2024-06-11 09:44:09.024366] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.254 [2024-06-11 09:44:09.024488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.254 [2024-06-11 09:44:09.024504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.254 [2024-06-11 09:44:09.024511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.254 [2024-06-11 09:44:09.024518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.254 [2024-06-11 09:44:09.024532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.254 qpair failed and we were unable to recover it. 00:29:37.254 [2024-06-11 09:44:09.034323] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.254 [2024-06-11 09:44:09.034397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.254 [2024-06-11 09:44:09.034414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.254 [2024-06-11 09:44:09.034421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.254 [2024-06-11 09:44:09.034428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.254 [2024-06-11 09:44:09.034443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.254 qpair failed and we were unable to recover it. 00:29:37.254 [2024-06-11 09:44:09.044412] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.254 [2024-06-11 09:44:09.044480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.254 [2024-06-11 09:44:09.044496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.254 [2024-06-11 09:44:09.044503] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.254 [2024-06-11 09:44:09.044509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.254 [2024-06-11 09:44:09.044524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.254 qpair failed and we were unable to recover it. 
00:29:37.254 [2024-06-11 09:44:09.054399] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.254 [2024-06-11 09:44:09.054470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.254 [2024-06-11 09:44:09.054485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.254 [2024-06-11 09:44:09.054492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.254 [2024-06-11 09:44:09.054499] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.254 [2024-06-11 09:44:09.054514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.254 qpair failed and we were unable to recover it. 00:29:37.254 [2024-06-11 09:44:09.064430] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.254 [2024-06-11 09:44:09.064555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.254 [2024-06-11 09:44:09.064571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.254 [2024-06-11 09:44:09.064579] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.254 [2024-06-11 09:44:09.064585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.254 [2024-06-11 09:44:09.064600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.254 qpair failed and we were unable to recover it. 00:29:37.516 [2024-06-11 09:44:09.074496] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.516 [2024-06-11 09:44:09.074577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.516 [2024-06-11 09:44:09.074592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.516 [2024-06-11 09:44:09.074601] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.516 [2024-06-11 09:44:09.074608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.516 [2024-06-11 09:44:09.074622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.516 qpair failed and we were unable to recover it. 
00:29:37.516 [2024-06-11 09:44:09.084489] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.516 [2024-06-11 09:44:09.084556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.516 [2024-06-11 09:44:09.084571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.516 [2024-06-11 09:44:09.084579] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.516 [2024-06-11 09:44:09.084585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.516 [2024-06-11 09:44:09.084599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.516 qpair failed and we were unable to recover it. 00:29:37.516 [2024-06-11 09:44:09.094511] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.516 [2024-06-11 09:44:09.094579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.516 [2024-06-11 09:44:09.094594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.516 [2024-06-11 09:44:09.094601] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.516 [2024-06-11 09:44:09.094607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.516 [2024-06-11 09:44:09.094622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.517 qpair failed and we were unable to recover it. 00:29:37.517 [2024-06-11 09:44:09.104550] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:37.517 [2024-06-11 09:44:09.104622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:37.517 [2024-06-11 09:44:09.104637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:37.517 [2024-06-11 09:44:09.104644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:37.517 [2024-06-11 09:44:09.104654] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:37.517 [2024-06-11 09:44:09.104669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.517 qpair failed and we were unable to recover it. 
00:29:37.517 [2024-06-11 09:44:09.114590] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.517 [2024-06-11 09:44:09.114664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.517 [2024-06-11 09:44:09.114679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.517 [2024-06-11 09:44:09.114687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.517 [2024-06-11 09:44:09.114693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.517 [2024-06-11 09:44:09.114709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.517 qpair failed and we were unable to recover it.
00:29:37.517 [2024-06-11 09:44:09.124611] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.517 [2024-06-11 09:44:09.124686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.517 [2024-06-11 09:44:09.124701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.517 [2024-06-11 09:44:09.124708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.517 [2024-06-11 09:44:09.124715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.517 [2024-06-11 09:44:09.124730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.517 qpair failed and we were unable to recover it.
00:29:37.517 [2024-06-11 09:44:09.134654] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.517 [2024-06-11 09:44:09.134725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.517 [2024-06-11 09:44:09.134741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.517 [2024-06-11 09:44:09.134748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.517 [2024-06-11 09:44:09.134754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.517 [2024-06-11 09:44:09.134769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.517 qpair failed and we were unable to recover it.
00:29:37.517 [2024-06-11 09:44:09.144559] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.517 [2024-06-11 09:44:09.144629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.517 [2024-06-11 09:44:09.144644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.517 [2024-06-11 09:44:09.144652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.517 [2024-06-11 09:44:09.144658] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.517 [2024-06-11 09:44:09.144673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.517 qpair failed and we were unable to recover it.
00:29:37.517 [2024-06-11 09:44:09.154595] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.517 [2024-06-11 09:44:09.154683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.517 [2024-06-11 09:44:09.154701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.517 [2024-06-11 09:44:09.154708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.517 [2024-06-11 09:44:09.154714] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.517 [2024-06-11 09:44:09.154730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.517 qpair failed and we were unable to recover it.
00:29:37.517 [2024-06-11 09:44:09.164734] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.517 [2024-06-11 09:44:09.164806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.517 [2024-06-11 09:44:09.164822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.517 [2024-06-11 09:44:09.164829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.517 [2024-06-11 09:44:09.164836] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.517 [2024-06-11 09:44:09.164851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.517 qpair failed and we were unable to recover it.
00:29:37.517 [2024-06-11 09:44:09.174754] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.517 [2024-06-11 09:44:09.174857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.517 [2024-06-11 09:44:09.174873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.517 [2024-06-11 09:44:09.174881] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.517 [2024-06-11 09:44:09.174887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.517 [2024-06-11 09:44:09.174902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.517 qpair failed and we were unable to recover it.
00:29:37.517 [2024-06-11 09:44:09.184700] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.517 [2024-06-11 09:44:09.184773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.517 [2024-06-11 09:44:09.184789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.517 [2024-06-11 09:44:09.184796] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.517 [2024-06-11 09:44:09.184802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.517 [2024-06-11 09:44:09.184817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.517 qpair failed and we were unable to recover it.
00:29:37.517 [2024-06-11 09:44:09.194801] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.517 [2024-06-11 09:44:09.194867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.517 [2024-06-11 09:44:09.194882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.517 [2024-06-11 09:44:09.194892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.517 [2024-06-11 09:44:09.194899] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.517 [2024-06-11 09:44:09.194914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.517 qpair failed and we were unable to recover it.
00:29:37.517 [2024-06-11 09:44:09.204816] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.517 [2024-06-11 09:44:09.204896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.517 [2024-06-11 09:44:09.204911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.517 [2024-06-11 09:44:09.204918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.517 [2024-06-11 09:44:09.204925] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.517 [2024-06-11 09:44:09.204939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.517 qpair failed and we were unable to recover it.
00:29:37.517 [2024-06-11 09:44:09.214880] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.517 [2024-06-11 09:44:09.214953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.518 [2024-06-11 09:44:09.214969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.518 [2024-06-11 09:44:09.214976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.518 [2024-06-11 09:44:09.214982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.518 [2024-06-11 09:44:09.214997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.518 qpair failed and we were unable to recover it.
00:29:37.518 [2024-06-11 09:44:09.224980] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.518 [2024-06-11 09:44:09.225060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.518 [2024-06-11 09:44:09.225086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.518 [2024-06-11 09:44:09.225095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.518 [2024-06-11 09:44:09.225102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.518 [2024-06-11 09:44:09.225121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.518 qpair failed and we were unable to recover it.
00:29:37.518 [2024-06-11 09:44:09.234940] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.518 [2024-06-11 09:44:09.235021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.518 [2024-06-11 09:44:09.235038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.518 [2024-06-11 09:44:09.235046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.518 [2024-06-11 09:44:09.235053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.518 [2024-06-11 09:44:09.235068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.518 qpair failed and we were unable to recover it.
00:29:37.518 [2024-06-11 09:44:09.244913] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.518 [2024-06-11 09:44:09.244990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.518 [2024-06-11 09:44:09.245015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.518 [2024-06-11 09:44:09.245024] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.518 [2024-06-11 09:44:09.245031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.518 [2024-06-11 09:44:09.245050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.518 qpair failed and we were unable to recover it.
00:29:37.518 [2024-06-11 09:44:09.254867] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.518 [2024-06-11 09:44:09.255044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.518 [2024-06-11 09:44:09.255063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.518 [2024-06-11 09:44:09.255071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.518 [2024-06-11 09:44:09.255077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.518 [2024-06-11 09:44:09.255093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.518 qpair failed and we were unable to recover it.
00:29:37.518 [2024-06-11 09:44:09.264910] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.518 [2024-06-11 09:44:09.264992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.518 [2024-06-11 09:44:09.265008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.518 [2024-06-11 09:44:09.265015] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.518 [2024-06-11 09:44:09.265022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.518 [2024-06-11 09:44:09.265037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.518 qpair failed and we were unable to recover it.
00:29:37.518 [2024-06-11 09:44:09.275047] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.518 [2024-06-11 09:44:09.275119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.518 [2024-06-11 09:44:09.275144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.518 [2024-06-11 09:44:09.275153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.518 [2024-06-11 09:44:09.275160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.518 [2024-06-11 09:44:09.275179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.518 qpair failed and we were unable to recover it.
00:29:37.518 [2024-06-11 09:44:09.285108] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.518 [2024-06-11 09:44:09.285184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.518 [2024-06-11 09:44:09.285209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.518 [2024-06-11 09:44:09.285222] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.518 [2024-06-11 09:44:09.285229] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.518 [2024-06-11 09:44:09.285248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.518 qpair failed and we were unable to recover it.
00:29:37.518 [2024-06-11 09:44:09.294986] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.518 [2024-06-11 09:44:09.295059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.518 [2024-06-11 09:44:09.295076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.518 [2024-06-11 09:44:09.295084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.518 [2024-06-11 09:44:09.295090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.518 [2024-06-11 09:44:09.295107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.518 qpair failed and we were unable to recover it.
00:29:37.518 [2024-06-11 09:44:09.305113] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.518 [2024-06-11 09:44:09.305184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.518 [2024-06-11 09:44:09.305200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.518 [2024-06-11 09:44:09.305207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.518 [2024-06-11 09:44:09.305214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.518 [2024-06-11 09:44:09.305229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.518 qpair failed and we were unable to recover it.
00:29:37.518 [2024-06-11 09:44:09.315113] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.518 [2024-06-11 09:44:09.315190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.518 [2024-06-11 09:44:09.315206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.518 [2024-06-11 09:44:09.315214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.518 [2024-06-11 09:44:09.315221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.518 [2024-06-11 09:44:09.315236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.518 qpair failed and we were unable to recover it.
00:29:37.518 [2024-06-11 09:44:09.325193] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.518 [2024-06-11 09:44:09.325259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.518 [2024-06-11 09:44:09.325275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.518 [2024-06-11 09:44:09.325282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.518 [2024-06-11 09:44:09.325288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.518 [2024-06-11 09:44:09.325304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.518 qpair failed and we were unable to recover it.
00:29:37.780 [2024-06-11 09:44:09.335193] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.780 [2024-06-11 09:44:09.335266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.780 [2024-06-11 09:44:09.335282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.780 [2024-06-11 09:44:09.335289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.780 [2024-06-11 09:44:09.335296] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.780 [2024-06-11 09:44:09.335310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.780 qpair failed and we were unable to recover it.
00:29:37.780 [2024-06-11 09:44:09.345214] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.780 [2024-06-11 09:44:09.345288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.780 [2024-06-11 09:44:09.345304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.780 [2024-06-11 09:44:09.345311] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.780 [2024-06-11 09:44:09.345324] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.780 [2024-06-11 09:44:09.345339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.780 qpair failed and we were unable to recover it.
00:29:37.780 [2024-06-11 09:44:09.355252] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.780 [2024-06-11 09:44:09.355324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.780 [2024-06-11 09:44:09.355339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.780 [2024-06-11 09:44:09.355347] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.780 [2024-06-11 09:44:09.355354] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.780 [2024-06-11 09:44:09.355368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.780 qpair failed and we were unable to recover it.
00:29:37.780 [2024-06-11 09:44:09.365274] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.780 [2024-06-11 09:44:09.365347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.780 [2024-06-11 09:44:09.365364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.780 [2024-06-11 09:44:09.365371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.780 [2024-06-11 09:44:09.365378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.780 [2024-06-11 09:44:09.365392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.780 qpair failed and we were unable to recover it.
00:29:37.780 [2024-06-11 09:44:09.375305] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.780 [2024-06-11 09:44:09.375388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.780 [2024-06-11 09:44:09.375407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.780 [2024-06-11 09:44:09.375416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.780 [2024-06-11 09:44:09.375422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.780 [2024-06-11 09:44:09.375437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.780 qpair failed and we were unable to recover it.
00:29:37.780 [2024-06-11 09:44:09.385318] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.780 [2024-06-11 09:44:09.385393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.780 [2024-06-11 09:44:09.385408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.780 [2024-06-11 09:44:09.385416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.780 [2024-06-11 09:44:09.385422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.780 [2024-06-11 09:44:09.385437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.780 qpair failed and we were unable to recover it.
00:29:37.780 [2024-06-11 09:44:09.395247] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.780 [2024-06-11 09:44:09.395313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.780 [2024-06-11 09:44:09.395334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.780 [2024-06-11 09:44:09.395341] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.780 [2024-06-11 09:44:09.395347] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.780 [2024-06-11 09:44:09.395365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.780 qpair failed and we were unable to recover it.
00:29:37.780 [2024-06-11 09:44:09.405375] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.780 [2024-06-11 09:44:09.405479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.780 [2024-06-11 09:44:09.405495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.780 [2024-06-11 09:44:09.405502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.780 [2024-06-11 09:44:09.405509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.780 [2024-06-11 09:44:09.405523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.780 qpair failed and we were unable to recover it.
00:29:37.780 [2024-06-11 09:44:09.415420] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.780 [2024-06-11 09:44:09.415493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.780 [2024-06-11 09:44:09.415508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.781 [2024-06-11 09:44:09.415515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.781 [2024-06-11 09:44:09.415521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.781 [2024-06-11 09:44:09.415541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.781 qpair failed and we were unable to recover it.
00:29:37.781 [2024-06-11 09:44:09.425494] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.781 [2024-06-11 09:44:09.425609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.781 [2024-06-11 09:44:09.425625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.781 [2024-06-11 09:44:09.425632] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.781 [2024-06-11 09:44:09.425638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.781 [2024-06-11 09:44:09.425654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.781 qpair failed and we were unable to recover it.
00:29:37.781 [2024-06-11 09:44:09.435456] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.781 [2024-06-11 09:44:09.435524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.781 [2024-06-11 09:44:09.435539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.781 [2024-06-11 09:44:09.435547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.781 [2024-06-11 09:44:09.435553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.781 [2024-06-11 09:44:09.435567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.781 qpair failed and we were unable to recover it.
00:29:37.781 [2024-06-11 09:44:09.445394] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.781 [2024-06-11 09:44:09.445468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.781 [2024-06-11 09:44:09.445483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.781 [2024-06-11 09:44:09.445490] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.781 [2024-06-11 09:44:09.445496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.781 [2024-06-11 09:44:09.445511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.781 qpair failed and we were unable to recover it.
00:29:37.781 [2024-06-11 09:44:09.455564] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.781 [2024-06-11 09:44:09.455704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.781 [2024-06-11 09:44:09.455720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.781 [2024-06-11 09:44:09.455728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.781 [2024-06-11 09:44:09.455734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.781 [2024-06-11 09:44:09.455749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.781 qpair failed and we were unable to recover it.
00:29:37.781 [2024-06-11 09:44:09.465574] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.781 [2024-06-11 09:44:09.465653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.781 [2024-06-11 09:44:09.465673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.781 [2024-06-11 09:44:09.465681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.781 [2024-06-11 09:44:09.465687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.781 [2024-06-11 09:44:09.465702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.781 qpair failed and we were unable to recover it.
00:29:37.781 [2024-06-11 09:44:09.475587] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.781 [2024-06-11 09:44:09.475656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.781 [2024-06-11 09:44:09.475672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.781 [2024-06-11 09:44:09.475679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.781 [2024-06-11 09:44:09.475685] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.781 [2024-06-11 09:44:09.475700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.781 qpair failed and we were unable to recover it.
00:29:37.781 [2024-06-11 09:44:09.485519] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.781 [2024-06-11 09:44:09.485657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.781 [2024-06-11 09:44:09.485682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.781 [2024-06-11 09:44:09.485690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.781 [2024-06-11 09:44:09.485696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.781 [2024-06-11 09:44:09.485710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.781 qpair failed and we were unable to recover it.
00:29:37.781 [2024-06-11 09:44:09.495656] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.781 [2024-06-11 09:44:09.495765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.781 [2024-06-11 09:44:09.495781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.781 [2024-06-11 09:44:09.495788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.781 [2024-06-11 09:44:09.495794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.781 [2024-06-11 09:44:09.495808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.781 qpair failed and we were unable to recover it.
00:29:37.781 [2024-06-11 09:44:09.505682] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.781 [2024-06-11 09:44:09.505755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.781 [2024-06-11 09:44:09.505770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.781 [2024-06-11 09:44:09.505778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.781 [2024-06-11 09:44:09.505788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.781 [2024-06-11 09:44:09.505802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.781 qpair failed and we were unable to recover it.
00:29:37.781 [2024-06-11 09:44:09.515711] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.781 [2024-06-11 09:44:09.515778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.781 [2024-06-11 09:44:09.515794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.781 [2024-06-11 09:44:09.515802] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.781 [2024-06-11 09:44:09.515808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.781 [2024-06-11 09:44:09.515823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.781 qpair failed and we were unable to recover it.
00:29:37.781 [2024-06-11 09:44:09.525756] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.781 [2024-06-11 09:44:09.525840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.781 [2024-06-11 09:44:09.525857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.781 [2024-06-11 09:44:09.525864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.781 [2024-06-11 09:44:09.525870] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.781 [2024-06-11 09:44:09.525885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.781 qpair failed and we were unable to recover it.
00:29:37.781 [2024-06-11 09:44:09.535653] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.781 [2024-06-11 09:44:09.535725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.781 [2024-06-11 09:44:09.535740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.781 [2024-06-11 09:44:09.535748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.781 [2024-06-11 09:44:09.535754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.781 [2024-06-11 09:44:09.535769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.781 qpair failed and we were unable to recover it.
00:29:37.781 [2024-06-11 09:44:09.545751] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.781 [2024-06-11 09:44:09.545821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.781 [2024-06-11 09:44:09.545836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.781 [2024-06-11 09:44:09.545843] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.781 [2024-06-11 09:44:09.545850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.782 [2024-06-11 09:44:09.545864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.782 qpair failed and we were unable to recover it.
00:29:37.782 [2024-06-11 09:44:09.555801] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.782 [2024-06-11 09:44:09.555880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.782 [2024-06-11 09:44:09.555896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.782 [2024-06-11 09:44:09.555903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.782 [2024-06-11 09:44:09.555910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.782 [2024-06-11 09:44:09.555925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.782 qpair failed and we were unable to recover it.
00:29:37.782 [2024-06-11 09:44:09.565851] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.782 [2024-06-11 09:44:09.565922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.782 [2024-06-11 09:44:09.565938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.782 [2024-06-11 09:44:09.565945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.782 [2024-06-11 09:44:09.565952] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.782 [2024-06-11 09:44:09.565967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.782 qpair failed and we were unable to recover it.
00:29:37.782 [2024-06-11 09:44:09.575746] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.782 [2024-06-11 09:44:09.575818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.782 [2024-06-11 09:44:09.575833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.782 [2024-06-11 09:44:09.575840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.782 [2024-06-11 09:44:09.575847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.782 [2024-06-11 09:44:09.575861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.782 qpair failed and we were unable to recover it.
00:29:37.782 [2024-06-11 09:44:09.585884] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:37.782 [2024-06-11 09:44:09.585958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:37.782 [2024-06-11 09:44:09.585974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:37.782 [2024-06-11 09:44:09.585981] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:37.782 [2024-06-11 09:44:09.585988] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:37.782 [2024-06-11 09:44:09.586003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:37.782 qpair failed and we were unable to recover it.
00:29:38.043 [2024-06-11 09:44:09.595981] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.043 [2024-06-11 09:44:09.596046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.043 [2024-06-11 09:44:09.596064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.043 [2024-06-11 09:44:09.596075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.043 [2024-06-11 09:44:09.596081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.043 [2024-06-11 09:44:09.596097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.043 qpair failed and we were unable to recover it.
00:29:38.043 [2024-06-11 09:44:09.605938] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.043 [2024-06-11 09:44:09.606014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.043 [2024-06-11 09:44:09.606039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.043 [2024-06-11 09:44:09.606048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.043 [2024-06-11 09:44:09.606055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.043 [2024-06-11 09:44:09.606075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.043 qpair failed and we were unable to recover it.
00:29:38.043 [2024-06-11 09:44:09.615984] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.043 [2024-06-11 09:44:09.616062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.044 [2024-06-11 09:44:09.616087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.044 [2024-06-11 09:44:09.616095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.044 [2024-06-11 09:44:09.616103] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.044 [2024-06-11 09:44:09.616123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.044 qpair failed and we were unable to recover it.
00:29:38.044 [2024-06-11 09:44:09.625981] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.044 [2024-06-11 09:44:09.626060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.044 [2024-06-11 09:44:09.626077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.044 [2024-06-11 09:44:09.626084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.044 [2024-06-11 09:44:09.626091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.044 [2024-06-11 09:44:09.626106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.044 qpair failed and we were unable to recover it.
00:29:38.044 [2024-06-11 09:44:09.636024] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.044 [2024-06-11 09:44:09.636094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.044 [2024-06-11 09:44:09.636110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.044 [2024-06-11 09:44:09.636117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.044 [2024-06-11 09:44:09.636123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.044 [2024-06-11 09:44:09.636138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.044 qpair failed and we were unable to recover it.
00:29:38.044 [2024-06-11 09:44:09.646047] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.044 [2024-06-11 09:44:09.646122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.044 [2024-06-11 09:44:09.646147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.044 [2024-06-11 09:44:09.646155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.044 [2024-06-11 09:44:09.646162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.044 [2024-06-11 09:44:09.646181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.044 qpair failed and we were unable to recover it.
00:29:38.044 [2024-06-11 09:44:09.656072] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.044 [2024-06-11 09:44:09.656171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.044 [2024-06-11 09:44:09.656188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.044 [2024-06-11 09:44:09.656195] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.044 [2024-06-11 09:44:09.656202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.044 [2024-06-11 09:44:09.656218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.044 qpair failed and we were unable to recover it.
00:29:38.044 [2024-06-11 09:44:09.666122] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.044 [2024-06-11 09:44:09.666205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.044 [2024-06-11 09:44:09.666221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.044 [2024-06-11 09:44:09.666228] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.044 [2024-06-11 09:44:09.666234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.044 [2024-06-11 09:44:09.666249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.044 qpair failed and we were unable to recover it.
00:29:38.044 [2024-06-11 09:44:09.676123] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.044 [2024-06-11 09:44:09.676193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.044 [2024-06-11 09:44:09.676208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.044 [2024-06-11 09:44:09.676215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.044 [2024-06-11 09:44:09.676221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.044 [2024-06-11 09:44:09.676236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.044 qpair failed and we were unable to recover it.
00:29:38.044 [2024-06-11 09:44:09.686158] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.044 [2024-06-11 09:44:09.686225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.044 [2024-06-11 09:44:09.686240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.044 [2024-06-11 09:44:09.686252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.044 [2024-06-11 09:44:09.686258] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.044 [2024-06-11 09:44:09.686273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-06-11 09:44:09.696190] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.044 [2024-06-11 09:44:09.696261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.044 [2024-06-11 09:44:09.696276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.044 [2024-06-11 09:44:09.696284] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.044 [2024-06-11 09:44:09.696290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.044 [2024-06-11 09:44:09.696305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-06-11 09:44:09.706202] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.044 [2024-06-11 09:44:09.706277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.044 [2024-06-11 09:44:09.706292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.044 [2024-06-11 09:44:09.706299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.044 [2024-06-11 09:44:09.706306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.044 [2024-06-11 09:44:09.706325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.044 qpair failed and we were unable to recover it. 
00:29:38.044 [2024-06-11 09:44:09.716233] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.044 [2024-06-11 09:44:09.716303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.044 [2024-06-11 09:44:09.716322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.044 [2024-06-11 09:44:09.716330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.044 [2024-06-11 09:44:09.716336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.044 [2024-06-11 09:44:09.716351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-06-11 09:44:09.726282] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.044 [2024-06-11 09:44:09.726354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.044 [2024-06-11 09:44:09.726369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.044 [2024-06-11 09:44:09.726376] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.044 [2024-06-11 09:44:09.726383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.044 [2024-06-11 09:44:09.726398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.044 qpair failed and we were unable to recover it. 00:29:38.044 [2024-06-11 09:44:09.736298] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.044 [2024-06-11 09:44:09.736374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.044 [2024-06-11 09:44:09.736391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.044 [2024-06-11 09:44:09.736398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.044 [2024-06-11 09:44:09.736404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.044 [2024-06-11 09:44:09.736420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.044 qpair failed and we were unable to recover it. 
00:29:38.044 [2024-06-11 09:44:09.746383] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.044 [2024-06-11 09:44:09.746456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.044 [2024-06-11 09:44:09.746471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.044 [2024-06-11 09:44:09.746479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.044 [2024-06-11 09:44:09.746485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.045 [2024-06-11 09:44:09.746500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-06-11 09:44:09.756355] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.045 [2024-06-11 09:44:09.756425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.045 [2024-06-11 09:44:09.756441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.045 [2024-06-11 09:44:09.756448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.045 [2024-06-11 09:44:09.756454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.045 [2024-06-11 09:44:09.756469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-06-11 09:44:09.766415] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.045 [2024-06-11 09:44:09.766486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.045 [2024-06-11 09:44:09.766502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.045 [2024-06-11 09:44:09.766510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.045 [2024-06-11 09:44:09.766516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.045 [2024-06-11 09:44:09.766531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.045 qpair failed and we were unable to recover it. 
00:29:38.045 [2024-06-11 09:44:09.776440] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.045 [2024-06-11 09:44:09.776516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.045 [2024-06-11 09:44:09.776535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.045 [2024-06-11 09:44:09.776542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.045 [2024-06-11 09:44:09.776549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.045 [2024-06-11 09:44:09.776563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-06-11 09:44:09.786467] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.045 [2024-06-11 09:44:09.786546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.045 [2024-06-11 09:44:09.786561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.045 [2024-06-11 09:44:09.786569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.045 [2024-06-11 09:44:09.786575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.045 [2024-06-11 09:44:09.786590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-06-11 09:44:09.796500] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.045 [2024-06-11 09:44:09.796576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.045 [2024-06-11 09:44:09.796592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.045 [2024-06-11 09:44:09.796599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.045 [2024-06-11 09:44:09.796605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.045 [2024-06-11 09:44:09.796619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.045 qpair failed and we were unable to recover it. 
00:29:38.045 [2024-06-11 09:44:09.806485] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.045 [2024-06-11 09:44:09.806556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.045 [2024-06-11 09:44:09.806574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.045 [2024-06-11 09:44:09.806582] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.045 [2024-06-11 09:44:09.806588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.045 [2024-06-11 09:44:09.806603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-06-11 09:44:09.816532] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.045 [2024-06-11 09:44:09.816602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.045 [2024-06-11 09:44:09.816618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.045 [2024-06-11 09:44:09.816625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.045 [2024-06-11 09:44:09.816631] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.045 [2024-06-11 09:44:09.816650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-06-11 09:44:09.826553] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.045 [2024-06-11 09:44:09.826629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.045 [2024-06-11 09:44:09.826645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.045 [2024-06-11 09:44:09.826652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.045 [2024-06-11 09:44:09.826658] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.045 [2024-06-11 09:44:09.826673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.045 qpair failed and we were unable to recover it. 
00:29:38.045 [2024-06-11 09:44:09.836466] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.045 [2024-06-11 09:44:09.836536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.045 [2024-06-11 09:44:09.836551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.045 [2024-06-11 09:44:09.836559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.045 [2024-06-11 09:44:09.836565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.045 [2024-06-11 09:44:09.836579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-06-11 09:44:09.846668] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.045 [2024-06-11 09:44:09.846738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.045 [2024-06-11 09:44:09.846753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.045 [2024-06-11 09:44:09.846760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.045 [2024-06-11 09:44:09.846767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.045 [2024-06-11 09:44:09.846781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.045 qpair failed and we were unable to recover it. 00:29:38.045 [2024-06-11 09:44:09.856638] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.045 [2024-06-11 09:44:09.856709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.045 [2024-06-11 09:44:09.856724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.045 [2024-06-11 09:44:09.856731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.045 [2024-06-11 09:44:09.856737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.045 [2024-06-11 09:44:09.856751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.045 qpair failed and we were unable to recover it. 
00:29:38.308 [2024-06-11 09:44:09.866681] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.308 [2024-06-11 09:44:09.866756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.308 [2024-06-11 09:44:09.866775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.308 [2024-06-11 09:44:09.866782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.308 [2024-06-11 09:44:09.866788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.308 [2024-06-11 09:44:09.866803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.308 qpair failed and we were unable to recover it. 00:29:38.308 [2024-06-11 09:44:09.876689] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.308 [2024-06-11 09:44:09.876758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.308 [2024-06-11 09:44:09.876773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.308 [2024-06-11 09:44:09.876780] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.308 [2024-06-11 09:44:09.876786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.308 [2024-06-11 09:44:09.876801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.308 qpair failed and we were unable to recover it. 00:29:38.308 [2024-06-11 09:44:09.886713] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.308 [2024-06-11 09:44:09.886786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.308 [2024-06-11 09:44:09.886801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.308 [2024-06-11 09:44:09.886808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.308 [2024-06-11 09:44:09.886814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.308 [2024-06-11 09:44:09.886829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.308 qpair failed and we were unable to recover it. 
00:29:38.308 [2024-06-11 09:44:09.896650] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.309 [2024-06-11 09:44:09.896726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.309 [2024-06-11 09:44:09.896743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.309 [2024-06-11 09:44:09.896750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.309 [2024-06-11 09:44:09.896756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.309 [2024-06-11 09:44:09.896772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.309 qpair failed and we were unable to recover it. 00:29:38.309 [2024-06-11 09:44:09.906817] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.309 [2024-06-11 09:44:09.906895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.309 [2024-06-11 09:44:09.906911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.309 [2024-06-11 09:44:09.906918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.309 [2024-06-11 09:44:09.906928] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.309 [2024-06-11 09:44:09.906943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.309 qpair failed and we were unable to recover it. 00:29:38.309 [2024-06-11 09:44:09.916801] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.309 [2024-06-11 09:44:09.916869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.309 [2024-06-11 09:44:09.916884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.309 [2024-06-11 09:44:09.916891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.309 [2024-06-11 09:44:09.916897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.309 [2024-06-11 09:44:09.916912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.309 qpair failed and we were unable to recover it. 
00:29:38.309 [2024-06-11 09:44:09.926847] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.309 [2024-06-11 09:44:09.926914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.309 [2024-06-11 09:44:09.926929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.309 [2024-06-11 09:44:09.926936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.309 [2024-06-11 09:44:09.926942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.309 [2024-06-11 09:44:09.926956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.309 qpair failed and we were unable to recover it. 00:29:38.309 [2024-06-11 09:44:09.936879] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.309 [2024-06-11 09:44:09.936948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.309 [2024-06-11 09:44:09.936964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.309 [2024-06-11 09:44:09.936972] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.309 [2024-06-11 09:44:09.936978] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.309 [2024-06-11 09:44:09.936992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.309 qpair failed and we were unable to recover it. 00:29:38.309 [2024-06-11 09:44:09.946945] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.309 [2024-06-11 09:44:09.947061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.309 [2024-06-11 09:44:09.947086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.309 [2024-06-11 09:44:09.947095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.309 [2024-06-11 09:44:09.947102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.309 [2024-06-11 09:44:09.947121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.309 qpair failed and we were unable to recover it. 
00:29:38.309 [2024-06-11 09:44:09.956828] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.309 [2024-06-11 09:44:09.956909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.309 [2024-06-11 09:44:09.956927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.309 [2024-06-11 09:44:09.956934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.309 [2024-06-11 09:44:09.956942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.309 [2024-06-11 09:44:09.956960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.309 qpair failed and we were unable to recover it. 00:29:38.309 [2024-06-11 09:44:09.966939] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.309 [2024-06-11 09:44:09.967027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.309 [2024-06-11 09:44:09.967043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.309 [2024-06-11 09:44:09.967051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.309 [2024-06-11 09:44:09.967058] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.309 [2024-06-11 09:44:09.967073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.309 qpair failed and we were unable to recover it. 00:29:38.309 [2024-06-11 09:44:09.976970] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.309 [2024-06-11 09:44:09.977038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.309 [2024-06-11 09:44:09.977054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.309 [2024-06-11 09:44:09.977061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.309 [2024-06-11 09:44:09.977067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.309 [2024-06-11 09:44:09.977082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.309 qpair failed and we were unable to recover it. 
00:29:38.309 [2024-06-11 09:44:09.986975] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.309 [2024-06-11 09:44:09.987066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.309 [2024-06-11 09:44:09.987081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.309 [2024-06-11 09:44:09.987088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.309 [2024-06-11 09:44:09.987094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.309 [2024-06-11 09:44:09.987109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.309 qpair failed and we were unable to recover it. 00:29:38.309 [2024-06-11 09:44:09.997033] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.309 [2024-06-11 09:44:09.997103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.309 [2024-06-11 09:44:09.997118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.309 [2024-06-11 09:44:09.997125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.309 [2024-06-11 09:44:09.997136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.309 [2024-06-11 09:44:09.997151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.309 qpair failed and we were unable to recover it. 00:29:38.309 [2024-06-11 09:44:10.007054] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.309 [2024-06-11 09:44:10.007127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.309 [2024-06-11 09:44:10.007146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.309 [2024-06-11 09:44:10.007154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.309 [2024-06-11 09:44:10.007160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.309 [2024-06-11 09:44:10.007175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.309 qpair failed and we were unable to recover it. 
00:29:38.309 [2024-06-11 09:44:10.017015] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.309 [2024-06-11 09:44:10.017103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.309 [2024-06-11 09:44:10.017128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.309 [2024-06-11 09:44:10.017137] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.309 [2024-06-11 09:44:10.017144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.309 [2024-06-11 09:44:10.017163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.309 qpair failed and we were unable to recover it. 00:29:38.309 [2024-06-11 09:44:10.027087] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.309 [2024-06-11 09:44:10.027191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.309 [2024-06-11 09:44:10.027209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.310 [2024-06-11 09:44:10.027217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.310 [2024-06-11 09:44:10.027223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.310 [2024-06-11 09:44:10.027240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.310 qpair failed and we were unable to recover it. 00:29:38.310 [2024-06-11 09:44:10.037192] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.310 [2024-06-11 09:44:10.037264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.310 [2024-06-11 09:44:10.037280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.310 [2024-06-11 09:44:10.037288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.310 [2024-06-11 09:44:10.037294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.310 [2024-06-11 09:44:10.037309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.310 qpair failed and we were unable to recover it. 
00:29:38.310 [2024-06-11 09:44:10.047172] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.310 [2024-06-11 09:44:10.047246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.310 [2024-06-11 09:44:10.047262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.310 [2024-06-11 09:44:10.047269] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.310 [2024-06-11 09:44:10.047276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.310 [2024-06-11 09:44:10.047290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.310 qpair failed and we were unable to recover it. 00:29:38.310 [2024-06-11 09:44:10.057235] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.310 [2024-06-11 09:44:10.057324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.310 [2024-06-11 09:44:10.057340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.310 [2024-06-11 09:44:10.057347] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.310 [2024-06-11 09:44:10.057353] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.310 [2024-06-11 09:44:10.057369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.310 qpair failed and we were unable to recover it. 00:29:38.310 [2024-06-11 09:44:10.067233] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.310 [2024-06-11 09:44:10.067306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.310 [2024-06-11 09:44:10.067326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.310 [2024-06-11 09:44:10.067334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.310 [2024-06-11 09:44:10.067340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.310 [2024-06-11 09:44:10.067355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.310 qpair failed and we were unable to recover it. 
00:29:38.310 [2024-06-11 09:44:10.077297] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.310 [2024-06-11 09:44:10.077368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.310 [2024-06-11 09:44:10.077384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.310 [2024-06-11 09:44:10.077391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.310 [2024-06-11 09:44:10.077397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.310 [2024-06-11 09:44:10.077412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.310 qpair failed and we were unable to recover it. 00:29:38.310 [2024-06-11 09:44:10.087204] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.310 [2024-06-11 09:44:10.087270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.310 [2024-06-11 09:44:10.087285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.310 [2024-06-11 09:44:10.087296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.310 [2024-06-11 09:44:10.087302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.310 [2024-06-11 09:44:10.087322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.310 qpair failed and we were unable to recover it. 00:29:38.310 [2024-06-11 09:44:10.097368] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.310 [2024-06-11 09:44:10.097437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.310 [2024-06-11 09:44:10.097452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.310 [2024-06-11 09:44:10.097459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.310 [2024-06-11 09:44:10.097466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.310 [2024-06-11 09:44:10.097482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.310 qpair failed and we were unable to recover it. 
00:29:38.310 [2024-06-11 09:44:10.107381] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.310 [2024-06-11 09:44:10.107461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.310 [2024-06-11 09:44:10.107477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.310 [2024-06-11 09:44:10.107485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.310 [2024-06-11 09:44:10.107492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.310 [2024-06-11 09:44:10.107506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.310 qpair failed and we were unable to recover it. 00:29:38.310 [2024-06-11 09:44:10.117365] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.310 [2024-06-11 09:44:10.117432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.310 [2024-06-11 09:44:10.117448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.310 [2024-06-11 09:44:10.117455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.310 [2024-06-11 09:44:10.117461] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.310 [2024-06-11 09:44:10.117477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.310 qpair failed and we were unable to recover it. 00:29:38.573 [2024-06-11 09:44:10.127392] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.573 [2024-06-11 09:44:10.127464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.573 [2024-06-11 09:44:10.127479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.573 [2024-06-11 09:44:10.127486] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.573 [2024-06-11 09:44:10.127493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.573 [2024-06-11 09:44:10.127508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.573 qpair failed and we were unable to recover it. 
00:29:38.573 [2024-06-11 09:44:10.137450] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.573 [2024-06-11 09:44:10.137524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.573 [2024-06-11 09:44:10.137540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.573 [2024-06-11 09:44:10.137547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.573 [2024-06-11 09:44:10.137553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.573 [2024-06-11 09:44:10.137568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.573 qpair failed and we were unable to recover it. 00:29:38.573 [2024-06-11 09:44:10.147468] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.573 [2024-06-11 09:44:10.147558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.573 [2024-06-11 09:44:10.147574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.573 [2024-06-11 09:44:10.147581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.573 [2024-06-11 09:44:10.147587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.573 [2024-06-11 09:44:10.147602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.573 qpair failed and we were unable to recover it. 00:29:38.573 [2024-06-11 09:44:10.157566] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.573 [2024-06-11 09:44:10.157636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.573 [2024-06-11 09:44:10.157651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.573 [2024-06-11 09:44:10.157658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.573 [2024-06-11 09:44:10.157664] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.573 [2024-06-11 09:44:10.157679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.573 qpair failed and we were unable to recover it. 
00:29:38.573 [2024-06-11 09:44:10.167501] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.573 [2024-06-11 09:44:10.167573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.573 [2024-06-11 09:44:10.167588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.573 [2024-06-11 09:44:10.167596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.573 [2024-06-11 09:44:10.167602] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.573 [2024-06-11 09:44:10.167616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.573 qpair failed and we were unable to recover it. 00:29:38.573 [2024-06-11 09:44:10.177555] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.573 [2024-06-11 09:44:10.177625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.573 [2024-06-11 09:44:10.177643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.573 [2024-06-11 09:44:10.177651] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.573 [2024-06-11 09:44:10.177657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.573 [2024-06-11 09:44:10.177671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.573 qpair failed and we were unable to recover it. 00:29:38.573 [2024-06-11 09:44:10.187561] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.573 [2024-06-11 09:44:10.187638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.573 [2024-06-11 09:44:10.187653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.573 [2024-06-11 09:44:10.187660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.573 [2024-06-11 09:44:10.187666] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.573 [2024-06-11 09:44:10.187681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.573 qpair failed and we were unable to recover it. 
00:29:38.574 [2024-06-11 09:44:10.197613] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.574 [2024-06-11 09:44:10.197678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.574 [2024-06-11 09:44:10.197693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.574 [2024-06-11 09:44:10.197701] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.574 [2024-06-11 09:44:10.197707] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.574 [2024-06-11 09:44:10.197721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.574 qpair failed and we were unable to recover it. 00:29:38.574 [2024-06-11 09:44:10.207522] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.574 [2024-06-11 09:44:10.207620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.574 [2024-06-11 09:44:10.207635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.574 [2024-06-11 09:44:10.207642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.574 [2024-06-11 09:44:10.207649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.574 [2024-06-11 09:44:10.207663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.574 qpair failed and we were unable to recover it. 00:29:38.574 [2024-06-11 09:44:10.217689] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.574 [2024-06-11 09:44:10.217760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.574 [2024-06-11 09:44:10.217776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.574 [2024-06-11 09:44:10.217783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.574 [2024-06-11 09:44:10.217789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.574 [2024-06-11 09:44:10.217808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.574 qpair failed and we were unable to recover it. 
00:29:38.574 [2024-06-11 09:44:10.227593] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.574 [2024-06-11 09:44:10.227673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.574 [2024-06-11 09:44:10.227689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.574 [2024-06-11 09:44:10.227697] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.574 [2024-06-11 09:44:10.227703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.574 [2024-06-11 09:44:10.227718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.574 qpair failed and we were unable to recover it. 00:29:38.574 [2024-06-11 09:44:10.237712] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.574 [2024-06-11 09:44:10.237786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.574 [2024-06-11 09:44:10.237802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.574 [2024-06-11 09:44:10.237809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.574 [2024-06-11 09:44:10.237816] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.574 [2024-06-11 09:44:10.237831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.574 qpair failed and we were unable to recover it. 00:29:38.574 [2024-06-11 09:44:10.247743] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:38.574 [2024-06-11 09:44:10.247808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:38.574 [2024-06-11 09:44:10.247824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:38.574 [2024-06-11 09:44:10.247831] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:38.574 [2024-06-11 09:44:10.247837] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:38.574 [2024-06-11 09:44:10.247852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:38.574 qpair failed and we were unable to recover it. 
00:29:38.574 [2024-06-11 09:44:10.257744] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.574 [2024-06-11 09:44:10.257818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.574 [2024-06-11 09:44:10.257834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.574 [2024-06-11 09:44:10.257841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.574 [2024-06-11 09:44:10.257847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.574 [2024-06-11 09:44:10.257862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.574 qpair failed and we were unable to recover it.
00:29:38.574 [2024-06-11 09:44:10.267780] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.574 [2024-06-11 09:44:10.267856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.574 [2024-06-11 09:44:10.267875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.574 [2024-06-11 09:44:10.267883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.574 [2024-06-11 09:44:10.267890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.574 [2024-06-11 09:44:10.267904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.574 qpair failed and we were unable to recover it.
00:29:38.574 [2024-06-11 09:44:10.277769] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.574 [2024-06-11 09:44:10.277838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.574 [2024-06-11 09:44:10.277853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.574 [2024-06-11 09:44:10.277861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.574 [2024-06-11 09:44:10.277868] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.574 [2024-06-11 09:44:10.277883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.574 qpair failed and we were unable to recover it.
00:29:38.574 [2024-06-11 09:44:10.287854] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.574 [2024-06-11 09:44:10.287925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.574 [2024-06-11 09:44:10.287940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.574 [2024-06-11 09:44:10.287947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.575 [2024-06-11 09:44:10.287954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.575 [2024-06-11 09:44:10.287969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.575 qpair failed and we were unable to recover it.
00:29:38.575 [2024-06-11 09:44:10.297856] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.575 [2024-06-11 09:44:10.297929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.575 [2024-06-11 09:44:10.297945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.575 [2024-06-11 09:44:10.297952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.575 [2024-06-11 09:44:10.297960] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.575 [2024-06-11 09:44:10.297974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.575 qpair failed and we were unable to recover it.
00:29:38.575 [2024-06-11 09:44:10.307899] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.575 [2024-06-11 09:44:10.307982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.575 [2024-06-11 09:44:10.308008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.575 [2024-06-11 09:44:10.308018] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.575 [2024-06-11 09:44:10.308029] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.575 [2024-06-11 09:44:10.308048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.575 qpair failed and we were unable to recover it.
00:29:38.575 [2024-06-11 09:44:10.317935] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.575 [2024-06-11 09:44:10.318013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.575 [2024-06-11 09:44:10.318037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.575 [2024-06-11 09:44:10.318047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.575 [2024-06-11 09:44:10.318054] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.575 [2024-06-11 09:44:10.318074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.575 qpair failed and we were unable to recover it.
00:29:38.575 [2024-06-11 09:44:10.327965] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.575 [2024-06-11 09:44:10.328038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.575 [2024-06-11 09:44:10.328063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.575 [2024-06-11 09:44:10.328073] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.575 [2024-06-11 09:44:10.328079] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.575 [2024-06-11 09:44:10.328098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.575 qpair failed and we were unable to recover it.
00:29:38.575 [2024-06-11 09:44:10.337973] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.575 [2024-06-11 09:44:10.338044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.575 [2024-06-11 09:44:10.338062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.575 [2024-06-11 09:44:10.338069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.575 [2024-06-11 09:44:10.338075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.575 [2024-06-11 09:44:10.338092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.575 qpair failed and we were unable to recover it.
00:29:38.575 [2024-06-11 09:44:10.348008] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.575 [2024-06-11 09:44:10.348089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.575 [2024-06-11 09:44:10.348114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.575 [2024-06-11 09:44:10.348124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.575 [2024-06-11 09:44:10.348131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.575 [2024-06-11 09:44:10.348150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.575 qpair failed and we were unable to recover it.
00:29:38.575 [2024-06-11 09:44:10.358026] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.575 [2024-06-11 09:44:10.358103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.575 [2024-06-11 09:44:10.358121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.575 [2024-06-11 09:44:10.358128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.575 [2024-06-11 09:44:10.358135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.575 [2024-06-11 09:44:10.358152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.575 qpair failed and we were unable to recover it.
00:29:38.575 [2024-06-11 09:44:10.368041] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.575 [2024-06-11 09:44:10.368110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.575 [2024-06-11 09:44:10.368127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.575 [2024-06-11 09:44:10.368134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.575 [2024-06-11 09:44:10.368141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.575 [2024-06-11 09:44:10.368156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.575 qpair failed and we were unable to recover it.
00:29:38.575 [2024-06-11 09:44:10.378078] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.575 [2024-06-11 09:44:10.378173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.575 [2024-06-11 09:44:10.378189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.575 [2024-06-11 09:44:10.378196] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.575 [2024-06-11 09:44:10.378202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.575 [2024-06-11 09:44:10.378217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.575 qpair failed and we were unable to recover it.
00:29:38.837 [2024-06-11 09:44:10.388124] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.838 [2024-06-11 09:44:10.388207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.838 [2024-06-11 09:44:10.388223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.838 [2024-06-11 09:44:10.388230] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.838 [2024-06-11 09:44:10.388237] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.838 [2024-06-11 09:44:10.388252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.838 qpair failed and we were unable to recover it.
00:29:38.838 [2024-06-11 09:44:10.398150] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.838 [2024-06-11 09:44:10.398231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.838 [2024-06-11 09:44:10.398246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.838 [2024-06-11 09:44:10.398254] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.838 [2024-06-11 09:44:10.398264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.838 [2024-06-11 09:44:10.398279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.838 qpair failed and we were unable to recover it.
00:29:38.838 [2024-06-11 09:44:10.408145] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.838 [2024-06-11 09:44:10.408211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.838 [2024-06-11 09:44:10.408227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.838 [2024-06-11 09:44:10.408234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.838 [2024-06-11 09:44:10.408240] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.838 [2024-06-11 09:44:10.408255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.838 qpair failed and we were unable to recover it.
00:29:38.838 [2024-06-11 09:44:10.418215] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.838 [2024-06-11 09:44:10.418294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.838 [2024-06-11 09:44:10.418310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.838 [2024-06-11 09:44:10.418324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.838 [2024-06-11 09:44:10.418330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.838 [2024-06-11 09:44:10.418346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.838 qpair failed and we were unable to recover it.
00:29:38.838 [2024-06-11 09:44:10.428214] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.838 [2024-06-11 09:44:10.428293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.838 [2024-06-11 09:44:10.428308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.838 [2024-06-11 09:44:10.428321] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.838 [2024-06-11 09:44:10.428328] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.838 [2024-06-11 09:44:10.428343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.838 qpair failed and we were unable to recover it.
00:29:38.838 [2024-06-11 09:44:10.438245] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.838 [2024-06-11 09:44:10.438345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.838 [2024-06-11 09:44:10.438361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.838 [2024-06-11 09:44:10.438368] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.838 [2024-06-11 09:44:10.438375] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.838 [2024-06-11 09:44:10.438389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.838 qpair failed and we were unable to recover it.
00:29:38.838 [2024-06-11 09:44:10.448351] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.838 [2024-06-11 09:44:10.448420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.838 [2024-06-11 09:44:10.448436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.838 [2024-06-11 09:44:10.448443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.838 [2024-06-11 09:44:10.448449] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.838 [2024-06-11 09:44:10.448465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.838 qpair failed and we were unable to recover it.
00:29:38.838 [2024-06-11 09:44:10.458371] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.838 [2024-06-11 09:44:10.458489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.838 [2024-06-11 09:44:10.458506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.838 [2024-06-11 09:44:10.458516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.838 [2024-06-11 09:44:10.458524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.838 [2024-06-11 09:44:10.458539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.838 qpair failed and we were unable to recover it.
00:29:38.838 [2024-06-11 09:44:10.468225] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.838 [2024-06-11 09:44:10.468299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.838 [2024-06-11 09:44:10.468321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.838 [2024-06-11 09:44:10.468329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.838 [2024-06-11 09:44:10.468335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.838 [2024-06-11 09:44:10.468350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.838 qpair failed and we were unable to recover it.
00:29:38.838 [2024-06-11 09:44:10.478348] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.838 [2024-06-11 09:44:10.478415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.838 [2024-06-11 09:44:10.478430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.838 [2024-06-11 09:44:10.478437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.838 [2024-06-11 09:44:10.478444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.838 [2024-06-11 09:44:10.478458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.838 qpair failed and we were unable to recover it.
00:29:38.838 [2024-06-11 09:44:10.488406] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.838 [2024-06-11 09:44:10.488474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.838 [2024-06-11 09:44:10.488489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.838 [2024-06-11 09:44:10.488504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.838 [2024-06-11 09:44:10.488511] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.838 [2024-06-11 09:44:10.488525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.838 qpair failed and we were unable to recover it.
00:29:38.839 [2024-06-11 09:44:10.498404] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.839 [2024-06-11 09:44:10.498478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.839 [2024-06-11 09:44:10.498493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.839 [2024-06-11 09:44:10.498501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.839 [2024-06-11 09:44:10.498507] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.839 [2024-06-11 09:44:10.498522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.839 qpair failed and we were unable to recover it.
00:29:38.839 [2024-06-11 09:44:10.508362] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.839 [2024-06-11 09:44:10.508444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.839 [2024-06-11 09:44:10.508459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.839 [2024-06-11 09:44:10.508466] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.839 [2024-06-11 09:44:10.508473] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.839 [2024-06-11 09:44:10.508488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.839 qpair failed and we were unable to recover it.
00:29:38.839 [2024-06-11 09:44:10.518462] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.839 [2024-06-11 09:44:10.518535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.839 [2024-06-11 09:44:10.518551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.839 [2024-06-11 09:44:10.518559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.839 [2024-06-11 09:44:10.518566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.839 [2024-06-11 09:44:10.518581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.839 qpair failed and we were unable to recover it.
00:29:38.839 [2024-06-11 09:44:10.528581] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.839 [2024-06-11 09:44:10.528685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.839 [2024-06-11 09:44:10.528701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.839 [2024-06-11 09:44:10.528708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.839 [2024-06-11 09:44:10.528714] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.839 [2024-06-11 09:44:10.528729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.839 qpair failed and we were unable to recover it.
00:29:38.839 [2024-06-11 09:44:10.538585] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.839 [2024-06-11 09:44:10.538673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.839 [2024-06-11 09:44:10.538690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.839 [2024-06-11 09:44:10.538697] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.839 [2024-06-11 09:44:10.538704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.839 [2024-06-11 09:44:10.538719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.839 qpair failed and we were unable to recover it.
00:29:38.839 [2024-06-11 09:44:10.548558] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.839 [2024-06-11 09:44:10.548629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.839 [2024-06-11 09:44:10.548644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.839 [2024-06-11 09:44:10.548652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.839 [2024-06-11 09:44:10.548658] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.839 [2024-06-11 09:44:10.548672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.839 qpair failed and we were unable to recover it.
00:29:38.839 [2024-06-11 09:44:10.558586] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.839 [2024-06-11 09:44:10.558653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.839 [2024-06-11 09:44:10.558668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.839 [2024-06-11 09:44:10.558675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.839 [2024-06-11 09:44:10.558682] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.839 [2024-06-11 09:44:10.558697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.839 qpair failed and we were unable to recover it.
00:29:38.839 [2024-06-11 09:44:10.568505] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.839 [2024-06-11 09:44:10.568573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.839 [2024-06-11 09:44:10.568589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.839 [2024-06-11 09:44:10.568596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.839 [2024-06-11 09:44:10.568603] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.839 [2024-06-11 09:44:10.568618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.839 qpair failed and we were unable to recover it.
00:29:38.839 [2024-06-11 09:44:10.578634] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.839 [2024-06-11 09:44:10.578704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.839 [2024-06-11 09:44:10.578725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.839 [2024-06-11 09:44:10.578732] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.839 [2024-06-11 09:44:10.578739] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.839 [2024-06-11 09:44:10.578754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.839 qpair failed and we were unable to recover it.
00:29:38.839 [2024-06-11 09:44:10.588634] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.839 [2024-06-11 09:44:10.588714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.839 [2024-06-11 09:44:10.588729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.839 [2024-06-11 09:44:10.588736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.839 [2024-06-11 09:44:10.588743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.839 [2024-06-11 09:44:10.588757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.839 qpair failed and we were unable to recover it.
00:29:38.839 [2024-06-11 09:44:10.598708] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.839 [2024-06-11 09:44:10.598775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.839 [2024-06-11 09:44:10.598791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.839 [2024-06-11 09:44:10.598799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.839 [2024-06-11 09:44:10.598805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.839 [2024-06-11 09:44:10.598820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.839 qpair failed and we were unable to recover it.
00:29:38.839 [2024-06-11 09:44:10.608734] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.839 [2024-06-11 09:44:10.608809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.839 [2024-06-11 09:44:10.608824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.839 [2024-06-11 09:44:10.608831] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.839 [2024-06-11 09:44:10.608838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.839 [2024-06-11 09:44:10.608852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.839 qpair failed and we were unable to recover it.
00:29:38.839 [2024-06-11 09:44:10.618850] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.840 [2024-06-11 09:44:10.618975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.840 [2024-06-11 09:44:10.618991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.840 [2024-06-11 09:44:10.618999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.840 [2024-06-11 09:44:10.619005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.840 [2024-06-11 09:44:10.619024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.840 qpair failed and we were unable to recover it.
00:29:38.840 [2024-06-11 09:44:10.628808] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.840 [2024-06-11 09:44:10.628884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.840 [2024-06-11 09:44:10.628900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.840 [2024-06-11 09:44:10.628907] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.840 [2024-06-11 09:44:10.628913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.840 [2024-06-11 09:44:10.628928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.840 qpair failed and we were unable to recover it.
00:29:38.840 [2024-06-11 09:44:10.638811] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.840 [2024-06-11 09:44:10.638883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.840 [2024-06-11 09:44:10.638899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.840 [2024-06-11 09:44:10.638906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.840 [2024-06-11 09:44:10.638912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.840 [2024-06-11 09:44:10.638927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.840 qpair failed and we were unable to recover it.
00:29:38.840 [2024-06-11 09:44:10.648767] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:38.840 [2024-06-11 09:44:10.648856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:38.840 [2024-06-11 09:44:10.648872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:38.840 [2024-06-11 09:44:10.648879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:38.840 [2024-06-11 09:44:10.648885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:38.840 [2024-06-11 09:44:10.648899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:38.840 qpair failed and we were unable to recover it.
00:29:39.102 [2024-06-11 09:44:10.658960] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.102 [2024-06-11 09:44:10.659029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.102 [2024-06-11 09:44:10.659044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.102 [2024-06-11 09:44:10.659051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.102 [2024-06-11 09:44:10.659058] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:39.102 [2024-06-11 09:44:10.659072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:39.103 qpair failed and we were unable to recover it.
00:29:39.103 [2024-06-11 09:44:10.669008] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.103 [2024-06-11 09:44:10.669086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.103 [2024-06-11 09:44:10.669106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.103 [2024-06-11 09:44:10.669113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.103 [2024-06-11 09:44:10.669120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:39.103 [2024-06-11 09:44:10.669134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:39.103 qpair failed and we were unable to recover it.
00:29:39.103 [2024-06-11 09:44:10.678956] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.103 [2024-06-11 09:44:10.679029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.103 [2024-06-11 09:44:10.679045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.103 [2024-06-11 09:44:10.679052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.103 [2024-06-11 09:44:10.679059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:39.103 [2024-06-11 09:44:10.679076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:39.103 qpair failed and we were unable to recover it.
00:29:39.103 [2024-06-11 09:44:10.688958] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.103 [2024-06-11 09:44:10.689033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.103 [2024-06-11 09:44:10.689058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.103 [2024-06-11 09:44:10.689068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.103 [2024-06-11 09:44:10.689075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:39.103 [2024-06-11 09:44:10.689094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:39.103 qpair failed and we were unable to recover it.
00:29:39.103 [2024-06-11 09:44:10.698997] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.103 [2024-06-11 09:44:10.699091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.103 [2024-06-11 09:44:10.699117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.103 [2024-06-11 09:44:10.699126] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.103 [2024-06-11 09:44:10.699133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:39.103 [2024-06-11 09:44:10.699153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:39.103 qpair failed and we were unable to recover it.
00:29:39.103 [2024-06-11 09:44:10.709018] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.103 [2024-06-11 09:44:10.709089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.103 [2024-06-11 09:44:10.709106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.103 [2024-06-11 09:44:10.709113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.103 [2024-06-11 09:44:10.709120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:39.103 [2024-06-11 09:44:10.709140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:39.103 qpair failed and we were unable to recover it.
00:29:39.103 [2024-06-11 09:44:10.719037] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.103 [2024-06-11 09:44:10.719113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.103 [2024-06-11 09:44:10.719130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.103 [2024-06-11 09:44:10.719137] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.103 [2024-06-11 09:44:10.719144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:39.103 [2024-06-11 09:44:10.719159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:39.103 qpair failed and we were unable to recover it.
00:29:39.103 [2024-06-11 09:44:10.729077] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.103 [2024-06-11 09:44:10.729144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.103 [2024-06-11 09:44:10.729160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.103 [2024-06-11 09:44:10.729167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.103 [2024-06-11 09:44:10.729173] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:39.103 [2024-06-11 09:44:10.729188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:39.103 qpair failed and we were unable to recover it.
00:29:39.103 [2024-06-11 09:44:10.739126] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.103 [2024-06-11 09:44:10.739197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.103 [2024-06-11 09:44:10.739213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.103 [2024-06-11 09:44:10.739220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.103 [2024-06-11 09:44:10.739226] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:39.103 [2024-06-11 09:44:10.739241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:39.103 qpair failed and we were unable to recover it.
00:29:39.103 [2024-06-11 09:44:10.749139] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.103 [2024-06-11 09:44:10.749218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.103 [2024-06-11 09:44:10.749234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.103 [2024-06-11 09:44:10.749241] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.103 [2024-06-11 09:44:10.749248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:39.103 [2024-06-11 09:44:10.749262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:39.103 qpair failed and we were unable to recover it.
00:29:39.103 [2024-06-11 09:44:10.759165] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.103 [2024-06-11 09:44:10.759242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.103 [2024-06-11 09:44:10.759257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.103 [2024-06-11 09:44:10.759264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.103 [2024-06-11 09:44:10.759270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:39.103 [2024-06-11 09:44:10.759286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:39.103 qpair failed and we were unable to recover it.
00:29:39.103 [2024-06-11 09:44:10.769196] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.103 [2024-06-11 09:44:10.769265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.103 [2024-06-11 09:44:10.769281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.103 [2024-06-11 09:44:10.769288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.103 [2024-06-11 09:44:10.769294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:39.103 [2024-06-11 09:44:10.769310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:39.103 qpair failed and we were unable to recover it.
00:29:39.103 [2024-06-11 09:44:10.779263] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.103 [2024-06-11 09:44:10.779353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.103 [2024-06-11 09:44:10.779368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.103 [2024-06-11 09:44:10.779375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.103 [2024-06-11 09:44:10.779382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:39.104 [2024-06-11 09:44:10.779396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:39.104 qpair failed and we were unable to recover it.
00:29:39.104 [2024-06-11 09:44:10.789256] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.104 [2024-06-11 09:44:10.789357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.104 [2024-06-11 09:44:10.789373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.104 [2024-06-11 09:44:10.789380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.104 [2024-06-11 09:44:10.789387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:39.104 [2024-06-11 09:44:10.789401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:39.104 qpair failed and we were unable to recover it.
00:29:39.104 [2024-06-11 09:44:10.799270] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.104 [2024-06-11 09:44:10.799375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.104 [2024-06-11 09:44:10.799391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.104 [2024-06-11 09:44:10.799399] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.104 [2024-06-11 09:44:10.799409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:39.104 [2024-06-11 09:44:10.799424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:39.104 qpair failed and we were unable to recover it.
00:29:39.104 [2024-06-11 09:44:10.809311] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.104 [2024-06-11 09:44:10.809392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.104 [2024-06-11 09:44:10.809407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.104 [2024-06-11 09:44:10.809415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.104 [2024-06-11 09:44:10.809421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:39.104 [2024-06-11 09:44:10.809436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:39.104 qpair failed and we were unable to recover it.
00:29:39.104 [2024-06-11 09:44:10.819329] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.104 [2024-06-11 09:44:10.819405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.104 [2024-06-11 09:44:10.819421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.104 [2024-06-11 09:44:10.819428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.104 [2024-06-11 09:44:10.819435] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90
00:29:39.104 [2024-06-11 09:44:10.819450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:39.104 qpair failed and we were unable to recover it.
00:29:39.104 [2024-06-11 09:44:10.829319] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.104 [2024-06-11 09:44:10.829404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.104 [2024-06-11 09:44:10.829420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.104 [2024-06-11 09:44:10.829429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.104 [2024-06-11 09:44:10.829435] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.104 [2024-06-11 09:44:10.829450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.104 qpair failed and we were unable to recover it. 00:29:39.104 [2024-06-11 09:44:10.839375] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.104 [2024-06-11 09:44:10.839451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.104 [2024-06-11 09:44:10.839467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.104 [2024-06-11 09:44:10.839474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.104 [2024-06-11 09:44:10.839480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.104 [2024-06-11 09:44:10.839495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.104 qpair failed and we were unable to recover it. 00:29:39.104 [2024-06-11 09:44:10.849395] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.104 [2024-06-11 09:44:10.849468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.104 [2024-06-11 09:44:10.849483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.104 [2024-06-11 09:44:10.849490] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.104 [2024-06-11 09:44:10.849497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.104 [2024-06-11 09:44:10.849511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.104 qpair failed and we were unable to recover it. 
00:29:39.104 [2024-06-11 09:44:10.859456] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.104 [2024-06-11 09:44:10.859527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.104 [2024-06-11 09:44:10.859542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.104 [2024-06-11 09:44:10.859550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.104 [2024-06-11 09:44:10.859556] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.104 [2024-06-11 09:44:10.859571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.104 qpair failed and we were unable to recover it. 00:29:39.104 [2024-06-11 09:44:10.869449] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.104 [2024-06-11 09:44:10.869522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.104 [2024-06-11 09:44:10.869539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.104 [2024-06-11 09:44:10.869546] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.104 [2024-06-11 09:44:10.869552] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.104 [2024-06-11 09:44:10.869567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.104 qpair failed and we were unable to recover it. 00:29:39.104 [2024-06-11 09:44:10.879495] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.104 [2024-06-11 09:44:10.879562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.104 [2024-06-11 09:44:10.879578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.104 [2024-06-11 09:44:10.879585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.104 [2024-06-11 09:44:10.879592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.104 [2024-06-11 09:44:10.879606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.104 qpair failed and we were unable to recover it. 
00:29:39.104 [2024-06-11 09:44:10.889514] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.104 [2024-06-11 09:44:10.889615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.104 [2024-06-11 09:44:10.889631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.104 [2024-06-11 09:44:10.889642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.104 [2024-06-11 09:44:10.889648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.104 [2024-06-11 09:44:10.889662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.104 qpair failed and we were unable to recover it. 00:29:39.104 [2024-06-11 09:44:10.899543] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.104 [2024-06-11 09:44:10.899616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.104 [2024-06-11 09:44:10.899632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.104 [2024-06-11 09:44:10.899640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.104 [2024-06-11 09:44:10.899646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.104 [2024-06-11 09:44:10.899661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.104 qpair failed and we were unable to recover it. 00:29:39.104 [2024-06-11 09:44:10.909530] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.104 [2024-06-11 09:44:10.909654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.104 [2024-06-11 09:44:10.909670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.104 [2024-06-11 09:44:10.909677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.104 [2024-06-11 09:44:10.909684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.104 [2024-06-11 09:44:10.909698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.104 qpair failed and we were unable to recover it. 
00:29:39.367 [2024-06-11 09:44:10.919594] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.368 [2024-06-11 09:44:10.919664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.368 [2024-06-11 09:44:10.919680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.368 [2024-06-11 09:44:10.919687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.368 [2024-06-11 09:44:10.919693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.368 [2024-06-11 09:44:10.919709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.368 qpair failed and we were unable to recover it. 00:29:39.368 [2024-06-11 09:44:10.929632] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.368 [2024-06-11 09:44:10.929700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.368 [2024-06-11 09:44:10.929716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.368 [2024-06-11 09:44:10.929723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.368 [2024-06-11 09:44:10.929730] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.368 [2024-06-11 09:44:10.929744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.368 qpair failed and we were unable to recover it. 00:29:39.368 [2024-06-11 09:44:10.939636] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.368 [2024-06-11 09:44:10.939709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.368 [2024-06-11 09:44:10.939724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.368 [2024-06-11 09:44:10.939732] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.368 [2024-06-11 09:44:10.939738] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.368 [2024-06-11 09:44:10.939752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.368 qpair failed and we were unable to recover it. 
00:29:39.368 [2024-06-11 09:44:10.949684] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.368 [2024-06-11 09:44:10.949758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.368 [2024-06-11 09:44:10.949774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.368 [2024-06-11 09:44:10.949781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.368 [2024-06-11 09:44:10.949788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.368 [2024-06-11 09:44:10.949803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.368 qpair failed and we were unable to recover it. 00:29:39.368 [2024-06-11 09:44:10.959657] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.368 [2024-06-11 09:44:10.959724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.368 [2024-06-11 09:44:10.959739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.368 [2024-06-11 09:44:10.959746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.368 [2024-06-11 09:44:10.959752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.368 [2024-06-11 09:44:10.959766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.368 qpair failed and we were unable to recover it. 00:29:39.368 [2024-06-11 09:44:10.969764] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.368 [2024-06-11 09:44:10.969833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.368 [2024-06-11 09:44:10.969849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.368 [2024-06-11 09:44:10.969856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.368 [2024-06-11 09:44:10.969862] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.368 [2024-06-11 09:44:10.969878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.368 qpair failed and we were unable to recover it. 
00:29:39.368 [2024-06-11 09:44:10.979777] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.368 [2024-06-11 09:44:10.979846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.368 [2024-06-11 09:44:10.979862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.368 [2024-06-11 09:44:10.979872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.368 [2024-06-11 09:44:10.979879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.368 [2024-06-11 09:44:10.979894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.368 qpair failed and we were unable to recover it. 00:29:39.368 [2024-06-11 09:44:10.989787] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.368 [2024-06-11 09:44:10.989863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.368 [2024-06-11 09:44:10.989879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.368 [2024-06-11 09:44:10.989886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.368 [2024-06-11 09:44:10.989892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.368 [2024-06-11 09:44:10.989906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.368 qpair failed and we were unable to recover it. 00:29:39.368 [2024-06-11 09:44:10.999829] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.368 [2024-06-11 09:44:10.999896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.368 [2024-06-11 09:44:10.999912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.368 [2024-06-11 09:44:10.999919] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.368 [2024-06-11 09:44:10.999925] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.368 [2024-06-11 09:44:10.999940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.368 qpair failed and we were unable to recover it. 
00:29:39.368 [2024-06-11 09:44:11.009869] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.368 [2024-06-11 09:44:11.009935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.368 [2024-06-11 09:44:11.009951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.368 [2024-06-11 09:44:11.009958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.368 [2024-06-11 09:44:11.009965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.368 [2024-06-11 09:44:11.009979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.368 qpair failed and we were unable to recover it. 00:29:39.368 [2024-06-11 09:44:11.020043] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.368 [2024-06-11 09:44:11.020114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.368 [2024-06-11 09:44:11.020131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.368 [2024-06-11 09:44:11.020138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.368 [2024-06-11 09:44:11.020144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.368 [2024-06-11 09:44:11.020159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.368 qpair failed and we were unable to recover it. 00:29:39.368 [2024-06-11 09:44:11.029919] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.368 [2024-06-11 09:44:11.030045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.368 [2024-06-11 09:44:11.030063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.368 [2024-06-11 09:44:11.030071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.368 [2024-06-11 09:44:11.030077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.368 [2024-06-11 09:44:11.030092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.368 qpair failed and we were unable to recover it. 
00:29:39.368 [2024-06-11 09:44:11.039872] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.368 [2024-06-11 09:44:11.039953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.368 [2024-06-11 09:44:11.039978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.368 [2024-06-11 09:44:11.039987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.368 [2024-06-11 09:44:11.039995] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.368 [2024-06-11 09:44:11.040014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.368 qpair failed and we were unable to recover it. 00:29:39.368 [2024-06-11 09:44:11.049954] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.368 [2024-06-11 09:44:11.050027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.368 [2024-06-11 09:44:11.050052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.369 [2024-06-11 09:44:11.050061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.369 [2024-06-11 09:44:11.050068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.369 [2024-06-11 09:44:11.050088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.369 qpair failed and we were unable to recover it. 00:29:39.369 [2024-06-11 09:44:11.059996] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.369 [2024-06-11 09:44:11.060077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.369 [2024-06-11 09:44:11.060102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.369 [2024-06-11 09:44:11.060111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.369 [2024-06-11 09:44:11.060118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.369 [2024-06-11 09:44:11.060137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.369 qpair failed and we were unable to recover it. 
00:29:39.369 [2024-06-11 09:44:11.070023] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.369 [2024-06-11 09:44:11.070099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.369 [2024-06-11 09:44:11.070121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.369 [2024-06-11 09:44:11.070129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.369 [2024-06-11 09:44:11.070135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.369 [2024-06-11 09:44:11.070151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.369 qpair failed and we were unable to recover it. 00:29:39.369 [2024-06-11 09:44:11.080056] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:39.369 [2024-06-11 09:44:11.080121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:39.369 [2024-06-11 09:44:11.080138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:39.369 [2024-06-11 09:44:11.080145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:39.369 [2024-06-11 09:44:11.080151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42a4000b90 00:29:39.369 [2024-06-11 09:44:11.080167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:39.369 qpair failed and we were unable to recover it. 
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 [2024-06-11 09:44:11.080580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:39.369 [2024-06-11 09:44:11.090148] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.369 [2024-06-11 09:44:11.090263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.369 [2024-06-11 09:44:11.090283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.369 [2024-06-11 09:44:11.090295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.369 [2024-06-11 09:44:11.090302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42ac000b90
00:29:39.369 [2024-06-11 09:44:11.090323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:39.369 qpair failed and we were unable to recover it.
00:29:39.369 [2024-06-11 09:44:11.100094] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.369 [2024-06-11 09:44:11.100164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.369 [2024-06-11 09:44:11.100182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.369 [2024-06-11 09:44:11.100189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.369 [2024-06-11 09:44:11.100196] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42ac000b90
00:29:39.369 [2024-06-11 09:44:11.100213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:39.369 qpair failed and we were unable to recover it.
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Read completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 Write completed with error (sct=0, sc=8)
00:29:39.369 starting I/O failed
00:29:39.369 [2024-06-11 09:44:11.101082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.370 [2024-06-11 09:44:11.110157] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.370 [2024-06-11 09:44:11.110343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.370 [2024-06-11 09:44:11.110409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.370 [2024-06-11 09:44:11.110435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.370 [2024-06-11 09:44:11.110465] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfde270
00:29:39.370 [2024-06-11 09:44:11.110516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.370 qpair failed and we were unable to recover it.
00:29:39.370 [2024-06-11 09:44:11.120189] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.370 [2024-06-11 09:44:11.120301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.370 [2024-06-11 09:44:11.120345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.370 [2024-06-11 09:44:11.120360] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.370 [2024-06-11 09:44:11.120374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfde270
00:29:39.370 [2024-06-11 09:44:11.120404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:39.370 qpair failed and we were unable to recover it.
00:29:39.370 [2024-06-11 09:44:11.120781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfebe30 is same with the state(5) to be set
00:29:39.370 Read completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Read completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Read completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Read completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Write completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Read completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Write completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Write completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Read completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Read completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Write completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Write completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Read completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Read completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Write completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Write completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Write completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Write completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Write completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Write completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Read completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Write completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Read completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Write completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Read completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Read completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Read completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Read completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Read completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Write completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Read completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 Write completed with error (sct=0, sc=8)
00:29:39.370 starting I/O failed
00:29:39.370 [2024-06-11 09:44:11.121160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:39.370 [2024-06-11 09:44:11.130176] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.370 [2024-06-11 09:44:11.130246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.370 [2024-06-11 09:44:11.130265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.370 [2024-06-11 09:44:11.130271] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.370 [2024-06-11 09:44:11.130280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42b0000b90
00:29:39.370 [2024-06-11 09:44:11.130294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:39.370 qpair failed and we were unable to recover it.
00:29:39.370 [2024-06-11 09:44:11.140153] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:39.370 [2024-06-11 09:44:11.140235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:39.370 [2024-06-11 09:44:11.140254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:39.370 [2024-06-11 09:44:11.140261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:39.370 [2024-06-11 09:44:11.140265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f42b0000b90
00:29:39.370 [2024-06-11 09:44:11.140279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:39.370 qpair failed and we were unable to recover it.
00:29:39.370 [2024-06-11 09:44:11.140650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfebe30 (9): Bad file descriptor
00:29:39.370 Initializing NVMe Controllers
00:29:39.370 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:39.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:39.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:29:39.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:29:39.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:29:39.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:29:39.370 Initialization complete. Launching workers.
00:29:39.370 Starting thread on core 1
00:29:39.370 Starting thread on core 2
00:29:39.370 Starting thread on core 3
00:29:39.370 Starting thread on core 0
00:29:39.370 09:44:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:29:39.370
00:29:39.370 real 0m11.503s
00:29:39.370 user 0m21.400s
00:29:39.370 sys 0m3.962s
00:29:39.370 09:44:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable
00:29:39.370 09:44:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:39.370 ************************************
00:29:39.370 END TEST nvmf_target_disconnect_tc2
00:29:39.370 ************************************
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:39.631 rmmod nvme_tcp
00:29:39.631 rmmod nvme_fabrics
00:29:39.631 rmmod nvme_keyring
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1332620 ']'
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1332620
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 1332620 ']'
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 1332620
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1332620
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_4
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']'
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1332620'
00:29:39.631 killing process with pid 1332620
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 1332620
00:29:39.631 09:44:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 1332620
00:29:39.892 09:44:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:29:39.892 09:44:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:29:39.892 09:44:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:29:39.892 09:44:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:29:39.892 09:44:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:29:39.892 09:44:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:39.892 09:44:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:29:39.892 09:44:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:41.807 09:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:29:41.807
00:29:41.807 real 0m21.517s
00:29:41.807 user 0m49.228s
00:29:41.807 sys 0m9.829s
00:29:41.807 09:44:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable
00:29:41.807 09:44:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:29:41.807 ************************************
00:29:41.807 END TEST nvmf_target_disconnect
00:29:41.807 ************************************
00:29:41.807 09:44:13 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host
00:29:41.807 09:44:13 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable
00:29:41.807 09:44:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:41.807 09:44:13 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT
00:29:41.807
00:29:41.807 real 22m51.326s
00:29:41.807 user 49m2.934s
00:29:41.807 sys 7m6.755s
00:29:41.807 09:44:13 nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable
00:29:41.807 09:44:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:41.807 ************************************
00:29:41.807 END TEST nvmf_tcp
00:29:41.807 ************************************
00:29:42.069 09:44:13 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]]
00:29:42.069 09:44:13 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:29:42.069 09:44:13 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:29:42.069 09:44:13 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:29:42.069 09:44:13 -- common/autotest_common.sh@10 -- # set +x
00:29:42.069 ************************************
00:29:42.069 START TEST spdkcli_nvmf_tcp
00:29:42.069 ************************************
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:29:42.069 * Looking for test storage...
00:29:42.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:42.069 09:44:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:29:42.070 09:44:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1334450
00:29:42.070 09:44:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1334450
00:29:42.070 09:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # '[' -z 1334450 ']'
00:29:42.070 09:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:42.070 09:44:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:29:42.070 09:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local max_retries=100
00:29:42.070 09:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:42.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:42.070 09:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # xtrace_disable
00:29:42.070 09:44:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:42.070 [2024-06-11 09:44:13.881659] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization...
00:29:42.070 [2024-06-11 09:44:13.881731] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334450 ]
00:29:42.331 EAL: No free 2048 kB hugepages reported on node 1
00:29:42.331 [2024-06-11 09:44:13.963241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:42.331 [2024-06-11 09:44:14.059351] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:29:42.331 [2024-06-11 09:44:14.059359] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:29:43.274 09:44:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:29:43.274 09:44:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@863 -- # return 0
00:29:43.274 09:44:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:29:43.274 09:44:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable
00:29:43.274 09:44:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:43.274 09:44:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:29:43.274 09:44:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:29:43.274 09:44:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:29:43.274 09:44:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable
00:29:43.274 09:44:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:43.274 09:44:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:29:43.274 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:29:43.274 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:29:43.274 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:29:43.274 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:29:43.274 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:29:43.274 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:29:43.274 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:29:43.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:29:43.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:29:43.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:29:43.274 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:29:43.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:29:43.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:29:43.274 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:29:43.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:29:43.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:29:43.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:29:43.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:29:43.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:29:43.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:29:43.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:29:43.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:29:43.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True
00:29:43.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:29:43.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:29:43.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:29:43.274 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:29:43.274 '
00:29:45.819 [2024-06-11 09:44:17.171118] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:46.761 [2024-06-11 09:44:18.334921] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 ***
00:29:48.674 [2024-06-11 09:44:20.473331] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 ***
00:29:50.586 [2024-06-11 09:44:22.306761] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 ***
00:29:51.969 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:29:51.969 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:29:51.969 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:29:51.969 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:29:51.969 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:29:51.969 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:29:51.969 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:29:51.969 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:29:51.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:29:51.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:29:51.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:29:51.969 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:29:51.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:29:51.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:29:51.969 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:29:51.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:29:51.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:29:51.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:29:51.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:29:51.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:29:51.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:29:51.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:29:51.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:29:51.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:29:51.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:29:51.969 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:29:51.970 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:29:51.970 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:29:52.230 09:44:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:29:52.230 09:44:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable
00:29:52.230 09:44:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:52.230 09:44:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:29:52.230 09:44:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable
00:29:52.230 09:44:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:52.230 09:44:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match
00:29:52.230 09:44:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:29:52.491 09:44:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:52.491 09:44:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:52.491 09:44:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:52.491 09:44:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:52.491 09:44:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.751 09:44:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:52.751 09:44:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:52.751 09:44:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.751 09:44:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:52.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:52.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:52.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:52.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:52.751 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:52.751 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:52.751 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:52.751 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:52.751 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:52.751 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:52.751 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:52.751 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:52.751 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:52.751 ' 00:29:58.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:58.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:58.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:58.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:58.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:58.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:58.098 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:58.098 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:58.098 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:58.098 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:58.098 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:29:58.098 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:58.098 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:58.098 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1334450 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 1334450 ']' 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 1334450 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # uname 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1334450 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1334450' 00:29:58.098 killing process with pid 1334450 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # kill 1334450 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # wait 1334450 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1334450 ']' 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1334450 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 1334450 ']' 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 1334450 00:29:58.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1334450) - No such process 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # echo 'Process with pid 1334450 is not found' 00:29:58.098 Process with pid 1334450 is not found 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:58.098 00:29:58.098 real 0m15.784s 00:29:58.098 user 0m32.620s 00:29:58.098 sys 0m0.725s 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:58.098 09:44:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:58.098 ************************************ 00:29:58.098 END TEST spdkcli_nvmf_tcp 00:29:58.098 ************************************ 00:29:58.098 09:44:29 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:58.098 09:44:29 -- 
common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:58.098 09:44:29 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:58.098 09:44:29 -- common/autotest_common.sh@10 -- # set +x 00:29:58.098 ************************************ 00:29:58.098 START TEST nvmf_identify_passthru 00:29:58.098 ************************************ 00:29:58.098 09:44:29 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:58.098 * Looking for test storage... 00:29:58.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:58.098 09:44:29 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.098 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:58.098 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.098 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.098 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.098 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.098 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.098 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.098 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.098 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.098 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.098 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.098 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:58.098 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:58.098 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.098 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.098 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:58.098 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.098 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.098 09:44:29 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.098 09:44:29 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.098 09:44:29 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.098 09:44:29 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.099 09:44:29 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.099 09:44:29 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.099 09:44:29 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:58.099 09:44:29 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.099 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:58.099 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:58.099 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:58.099 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:58.099 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.099 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.099 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:58.099 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:58.099 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:58.099 09:44:29 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.099 09:44:29 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.099 09:44:29 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.099 09:44:29 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.099 09:44:29 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.099 09:44:29 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.099 09:44:29 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.099 09:44:29 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:58.099 09:44:29 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.099 09:44:29 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:58.099 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:58.099 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.099 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:58.099 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:58.099 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:58.099 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.099 09:44:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:58.099 09:44:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.099 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:58.099 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:58.099 09:44:29 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:58.099 09:44:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:04.693 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:04.693 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:04.693 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:04.693 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:04.693 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:04.693 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:04.693 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
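(The PCI bookkeeping traced around this point sorts NICs into e810/x722/mlx buckets by vendor:device ID before scanning for usable net devices. A minimal standalone sketch of the same classification, assuming the lspci tool from pciutils instead of SPDK's cached pci_bus_cache arrays — illustrative, not the actual helper:

    # lspci -D prints the full domain:bus:dev.fn address; -d filters by vendor:device.
    e810=($(lspci -D -d 8086:159b | awk '{print $1}'))   # Intel E810 (0x159b), as matched below
    x722=($(lspci -D -d 8086:37d2 | awk '{print $1}'))   # Intel X722 (0x37d2)
    pci_devs=("${e810[@]}")
    for pci in "${pci_devs[@]}"; do
        # report the bound kernel driver, e.g. "ice" for the E810 ports found below
        echo "Found $pci ($(basename "$(readlink "/sys/bus/pci/devices/$pci/driver")"))"
    done
)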
00:30:04.693 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:04.693 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:04.693 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:04.693 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:04.693 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:04.693 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:04.693 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:04.693 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:04.693 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.693 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.693 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:04.954 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:04.954 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:04.954 09:44:36 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:04.954 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:04.954 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:04.954 09:44:36 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:04.954 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.215 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:05.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:30:05.215 00:30:05.215 --- 10.0.0.2 ping statistics --- 00:30:05.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.215 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:30:05.215 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:05.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:30:05.215 00:30:05.215 --- 10.0.0.1 ping statistics --- 00:30:05.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.215 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:30:05.215 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.215 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:05.215 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:05.215 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.215 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:05.215 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:05.215 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.215 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:05.215 09:44:36 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:05.215 09:44:36 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:05.215 09:44:36 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:05.215 09:44:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:05.215 09:44:36 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:05.215 09:44:36 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=() 00:30:05.215 09:44:36 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # local bdfs 00:30:05.215 09:44:36 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:30:05.215 09:44:36 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:30:05.215 09:44:36 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=() 00:30:05.215 09:44:36 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # local bdfs 00:30:05.215 09:44:36 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:05.215 09:44:36 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:05.215 09:44:36 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:30:05.215 09:44:36 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:30:05.215 09:44:36 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:30:05.215 09:44:36 nvmf_identify_passthru -- common/autotest_common.sh@1526 -- # echo 0000:65:00.0 00:30:05.215 09:44:36 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:30:05.215 09:44:36 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:30:05.215 09:44:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:05.215 09:44:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:05.215 09:44:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:05.215 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.787 
09:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:30:05.787 09:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:05.787 09:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:05.787 09:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:05.787 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.359 09:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:30:06.359 09:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:06.359 09:44:37 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:06.359 09:44:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:06.359 09:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:06.359 09:44:37 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:06.359 09:44:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:06.359 09:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1341514 00:30:06.359 09:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:06.359 09:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:06.359 09:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1341514 00:30:06.359 09:44:37 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # '[' -z 1341514 ']' 00:30:06.359 09:44:37 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.359 09:44:37 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:06.359 09:44:37 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.359 09:44:37 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:06.359 09:44:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:06.359 [2024-06-11 09:44:38.002196] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:30:06.359 [2024-06-11 09:44:38.002262] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.359 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.359 [2024-06-11 09:44:38.090273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:06.620 [2024-06-11 09:44:38.185713] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.620 [2024-06-11 09:44:38.185768] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
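(The startup notices and JSON-RPC exchanges that follow — nvmf_set_config with --passthru-identify-ctrlr, framework_start_init, then transport/subsystem creation — are driven through rpc_cmd. The same sequence can be replayed by hand with SPDK's scripts/rpc.py, which targets /var/tmp/spdk.sock by default; a condensed sketch with arguments taken from the traced calls below:

    ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # pass identify through to the real ctrlr
    ./scripts/rpc.py framework_start_init                        # leave the --wait-for-rpc hold state
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
)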
00:30:06.620 [2024-06-11 09:44:38.185776] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.620 [2024-06-11 09:44:38.185788] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.620 [2024-06-11 09:44:38.185794] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:06.620 [2024-06-11 09:44:38.185928] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.620 [2024-06-11 09:44:38.186060] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:06.620 [2024-06-11 09:44:38.186229] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.620 [2024-06-11 09:44:38.186230] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:30:07.192 09:44:38 nvmf_identify_passthru -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:07.192 09:44:38 nvmf_identify_passthru -- common/autotest_common.sh@863 -- # return 0 00:30:07.192 09:44:38 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:07.192 09:44:38 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:07.192 09:44:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.192 INFO: Log level set to 20 00:30:07.192 INFO: Requests: 00:30:07.192 { 00:30:07.192 "jsonrpc": "2.0", 00:30:07.192 "method": "nvmf_set_config", 00:30:07.192 "id": 1, 00:30:07.192 "params": { 00:30:07.192 "admin_cmd_passthru": { 00:30:07.192 "identify_ctrlr": true 00:30:07.192 } 00:30:07.192 } 00:30:07.192 } 00:30:07.192 00:30:07.192 INFO: response: 00:30:07.192 { 00:30:07.192 "jsonrpc": "2.0", 00:30:07.192 "id": 1, 00:30:07.192 "result": true 00:30:07.192 } 00:30:07.192 00:30:07.192 09:44:38 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:07.192 09:44:38 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:07.192 09:44:38 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:07.192 09:44:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.192 INFO: Setting log level to 20 00:30:07.192 INFO: Setting log level to 20 00:30:07.192 INFO: Log level set to 20 00:30:07.192 INFO: Log level set to 20 00:30:07.192 INFO: Requests: 00:30:07.192 { 00:30:07.192 "jsonrpc": "2.0", 00:30:07.192 "method": "framework_start_init", 00:30:07.192 "id": 1 00:30:07.192 } 00:30:07.192 00:30:07.192 INFO: Requests: 00:30:07.192 { 00:30:07.192 "jsonrpc": "2.0", 00:30:07.192 "method": "framework_start_init", 00:30:07.192 "id": 1 00:30:07.192 } 00:30:07.192 00:30:07.192 [2024-06-11 09:44:38.966733] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:07.192 INFO: response: 00:30:07.192 { 00:30:07.192 "jsonrpc": "2.0", 00:30:07.192 "id": 1, 00:30:07.192 "result": true 00:30:07.192 } 00:30:07.192 00:30:07.192 INFO: response: 00:30:07.192 { 00:30:07.192 "jsonrpc": "2.0", 00:30:07.192 "id": 1, 00:30:07.192 "result": true 00:30:07.192 } 00:30:07.192 00:30:07.192 09:44:38 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:07.192 09:44:38 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:07.192 09:44:38 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:07.192 09:44:38 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:07.192 INFO: Setting log level to 40 00:30:07.192 INFO: Setting log level to 40 00:30:07.192 INFO: Setting log level to 40 00:30:07.192 [2024-06-11 09:44:38.979975] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.192 09:44:38 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:07.192 09:44:38 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:07.192 09:44:38 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:07.192 09:44:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.452 09:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:30:07.452 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:07.452 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.713 Nvme0n1 00:30:07.713 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:07.713 09:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:07.713 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:07.713 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.713 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:07.713 09:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:07.713 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:07.713 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.713 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:07.713 09:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:07.713 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:07.713 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.713 [2024-06-11 09:44:39.365188] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:07.713 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:07.713 09:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:07.713 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:07.713 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.713 [ 00:30:07.713 { 00:30:07.713 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:07.713 "subtype": "Discovery", 00:30:07.713 "listen_addresses": [], 00:30:07.713 "allow_any_host": true, 00:30:07.713 "hosts": [] 00:30:07.713 }, 00:30:07.713 { 00:30:07.713 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:07.713 "subtype": "NVMe", 00:30:07.713 "listen_addresses": [ 00:30:07.713 { 00:30:07.713 "trtype": "TCP", 00:30:07.713 "adrfam": "IPv4", 00:30:07.713 "traddr": "10.0.0.2", 00:30:07.713 "trsvcid": "4420" 00:30:07.713 } 00:30:07.713 ], 00:30:07.713 "allow_any_host": true, 00:30:07.713 "hosts": [], 00:30:07.713 "serial_number": 
"SPDK00000000000001", 00:30:07.713 "model_number": "SPDK bdev Controller", 00:30:07.713 "max_namespaces": 1, 00:30:07.713 "min_cntlid": 1, 00:30:07.713 "max_cntlid": 65519, 00:30:07.713 "namespaces": [ 00:30:07.713 { 00:30:07.713 "nsid": 1, 00:30:07.713 "bdev_name": "Nvme0n1", 00:30:07.713 "name": "Nvme0n1", 00:30:07.713 "nguid": "36344730526054870025384500000040", 00:30:07.713 "uuid": "36344730-5260-5487-0025-384500000040" 00:30:07.713 } 00:30:07.713 ] 00:30:07.713 } 00:30:07.713 ] 00:30:07.713 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:07.713 09:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:07.713 09:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:07.713 09:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:07.713 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.713 09:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:30:07.713 09:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:07.713 09:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:07.713 09:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:07.974 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.974 09:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:30:07.974 09:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:30:07.974 09:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:30:07.974 09:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:07.974 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:07.975 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:07.975 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:07.975 09:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:07.975 09:44:39 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:07.975 09:44:39 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:07.975 09:44:39 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:07.975 09:44:39 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:07.975 09:44:39 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:07.975 09:44:39 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:07.975 09:44:39 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:07.975 rmmod nvme_tcp 00:30:07.975 rmmod nvme_fabrics 00:30:07.975 rmmod nvme_keyring 00:30:07.975 09:44:39 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:07.975 09:44:39 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:07.975 09:44:39 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:07.975 09:44:39 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1341514 ']' 00:30:07.975 09:44:39 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1341514 00:30:07.975 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@949 -- # '[' -z 1341514 ']' 00:30:07.975 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # kill -0 1341514 00:30:07.975 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # uname 00:30:07.975 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:07.975 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1341514 00:30:08.235 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:08.235 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:08.235 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1341514' 00:30:08.235 killing process with pid 1341514 00:30:08.235 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # kill 1341514 00:30:08.235 09:44:39 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # wait 1341514 00:30:08.496 09:44:40 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:08.496 09:44:40 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:08.496 09:44:40 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:08.496 09:44:40 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:08.496 09:44:40 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:08.496 09:44:40 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.496 09:44:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:08.496 09:44:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.417 09:44:42 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:10.417 00:30:10.417 real 0m12.582s 00:30:10.417 user 0m9.954s 00:30:10.417 sys 0m6.047s 00:30:10.417 09:44:42 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:10.417 09:44:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:10.417 ************************************ 00:30:10.417 END TEST nvmf_identify_passthru 00:30:10.417 ************************************ 00:30:10.417 09:44:42 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:10.417 09:44:42 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:10.417 09:44:42 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:10.417 09:44:42 -- common/autotest_common.sh@10 -- # set +x 00:30:10.417 ************************************ 00:30:10.417 START TEST nvmf_dif 00:30:10.417 ************************************ 00:30:10.417 09:44:42 nvmf_dif -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:10.677 * Looking for test storage... 
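(dif.sh re-sources nvmf/common.sh, which mints a fresh host identity via nvme gen-hostnqn, visible in the trace that follows. A sketch of how that identity pair is typically derived and then used from the initiator side; the connect target here is illustrative, reusing the subsystem and listener values from the passthru test above:

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # the bare uuid, matching NVME_HOSTID below
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
)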
00:30:10.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:10.677 09:44:42 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:10.677 09:44:42 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:10.677 09:44:42 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.677 09:44:42 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.677 09:44:42 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:10.677 09:44:42 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.677 09:44:42 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:10.677 09:44:42 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.677 09:44:42 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.677 09:44:42 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.677 09:44:42 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.677 09:44:42 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.677 09:44:42 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:10.677 09:44:42 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:10.677 09:44:42 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.677 09:44:42 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:10.677 09:44:42 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:10.677 09:44:42 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.677 09:44:42 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.677 09:44:42 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.677 09:44:42 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.677 09:44:42 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.677 09:44:42 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.677 09:44:42 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.678 09:44:42 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.678 09:44:42 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:10.678 09:44:42 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.678 09:44:42 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:10.678 09:44:42 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:10.678 09:44:42 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:10.678 09:44:42 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.678 09:44:42 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.678 09:44:42 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.678 09:44:42 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:10.678 09:44:42 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:10.678 09:44:42 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:10.678 09:44:42 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:10.678 09:44:42 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:10.678 09:44:42 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:10.678 09:44:42 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:10.678 09:44:42 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:10.678 09:44:42 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:10.678 09:44:42 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:10.678 09:44:42 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:10.678 09:44:42 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:10.678 09:44:42 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:10.678 09:44:42 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.678 09:44:42 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:10.678 09:44:42 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.678 09:44:42 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:10.678 09:44:42 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:10.678 09:44:42 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:10.678 09:44:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:17.267 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:17.267 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:17.267 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:17.267 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:17.267 09:44:48 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:17.268 09:44:48 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:17.268 09:44:48 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:17.268 09:44:48 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:17.268 09:44:48 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:17.268 09:44:48 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:17.268 09:44:48 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:17.268 09:44:48 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:17.268 09:44:48 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:17.528 09:44:49 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:17.528 09:44:49 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.528 09:44:49 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:17.528 09:44:49 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.528 09:44:49 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:17.528 09:44:49 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:17.528 09:44:49 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:17.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:17.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:30:17.528 00:30:17.528 --- 10.0.0.2 ping statistics --- 00:30:17.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.528 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:30:17.528 09:44:49 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:17.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:17.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:30:17.528 00:30:17.528 --- 10.0.0.1 ping statistics --- 00:30:17.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.528 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:30:17.528 09:44:49 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:17.528 09:44:49 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:17.528 09:44:49 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:17.528 09:44:49 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:20.832 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:20.832 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:20.832 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:20.832 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:20.832 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:20.832 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:20.832 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:20.832 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:20.832 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:20.832 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:20.832 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:20.832 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:20.832 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:20.832 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:20.832 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:20.832 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:20.832 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:21.093 09:44:52 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:21.093 09:44:52 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:21.093 09:44:52 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:21.093 09:44:52 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:21.093 09:44:52 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:21.093 09:44:52 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:21.093 09:44:52 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:21.093 09:44:52 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:21.093 09:44:52 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:21.093 09:44:52 nvmf_dif -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:21.093 09:44:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:21.093 09:44:52 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1347363 00:30:21.093 09:44:52 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1347363 00:30:21.093 09:44:52 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:21.093 09:44:52 nvmf_dif -- common/autotest_common.sh@830 -- # '[' -z 1347363 ']' 00:30:21.093 09:44:52 nvmf_dif -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.093 09:44:52 nvmf_dif -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:21.093 09:44:52 nvmf_dif -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.093 09:44:52 nvmf_dif -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:21.093 09:44:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:21.093 [2024-06-11 09:44:52.796416] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:30:21.093 [2024-06-11 09:44:52.796475] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.093 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.093 [2024-06-11 09:44:52.883401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.354 [2024-06-11 09:44:52.978698] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.354 [2024-06-11 09:44:52.978756] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.354 [2024-06-11 09:44:52.978764] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.354 [2024-06-11 09:44:52.978770] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.354 [2024-06-11 09:44:52.978776] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
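[Editor's note] The block above is nvmftestinit wiring the two E810 ports into a self-contained TCP test topology: one port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then launched inside the namespace. A minimal standalone replay of the same steps, using the interface names and addresses from this run (they will differ on other rigs, and the two ports are presumably linked externally here; paths shortened):

  # target side: isolate one NIC port in its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # initiator side: the peer port stays in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify reachability both ways, then start the target inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF

Every command here appears in the trace itself; only the grouping and comments are editorial.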
00:30:21.354 [2024-06-11 09:44:52.978811] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.926 09:44:53 nvmf_dif -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:21.926 09:44:53 nvmf_dif -- common/autotest_common.sh@863 -- # return 0 00:30:21.926 09:44:53 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:21.926 09:44:53 nvmf_dif -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:21.926 09:44:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:21.926 09:44:53 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:21.926 09:44:53 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:21.926 09:44:53 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:21.926 09:44:53 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.926 09:44:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:21.926 [2024-06-11 09:44:53.730518] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:21.926 09:44:53 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.926 09:44:53 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:21.926 09:44:53 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:21.926 09:44:53 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:21.926 09:44:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:22.187 ************************************ 00:30:22.187 START TEST fio_dif_1_default 00:30:22.187 ************************************ 00:30:22.187 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # fio_dif_1 00:30:22.187 09:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:22.187 09:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:22.187 09:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:22.187 09:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:22.187 09:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:22.187 09:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:22.187 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:22.187 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:22.187 bdev_null0 00:30:22.187 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:22.187 09:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:22.187 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:22.187 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:22.187 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:22.187 09:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:22.187 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:22.187 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:22.187 09:44:53 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:22.187 09:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:22.188 [2024-06-11 09:44:53.818943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:22.188 { 00:30:22.188 "params": { 00:30:22.188 "name": "Nvme$subsystem", 00:30:22.188 "trtype": "$TEST_TRANSPORT", 00:30:22.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:22.188 "adrfam": "ipv4", 00:30:22.188 "trsvcid": "$NVMF_PORT", 00:30:22.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:22.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:22.188 "hdgst": ${hdgst:-false}, 00:30:22.188 "ddgst": ${ddgst:-false} 00:30:22.188 }, 00:30:22.188 "method": "bdev_nvme_attach_controller" 00:30:22.188 } 00:30:22.188 EOF 00:30:22.188 )") 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # shift 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libasan 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:22.188 "params": { 00:30:22.188 "name": "Nvme0", 00:30:22.188 "trtype": "tcp", 00:30:22.188 "traddr": "10.0.0.2", 00:30:22.188 "adrfam": "ipv4", 00:30:22.188 "trsvcid": "4420", 00:30:22.188 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:22.188 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:22.188 "hdgst": false, 00:30:22.188 "ddgst": false 00:30:22.188 }, 00:30:22.188 "method": "bdev_nvme_attach_controller" 00:30:22.188 }' 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:22.188 09:44:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:22.449 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:22.449 fio-3.35 00:30:22.449 Starting 1 thread 00:30:22.449 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.744 00:30:34.744 filename0: (groupid=0, jobs=1): err= 0: pid=1347890: Tue Jun 11 09:45:04 2024 00:30:34.744 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10032msec) 00:30:34.744 slat (nsec): min=8195, max=79043, avg=8464.85, stdev=2603.34 00:30:34.744 clat (usec): min=41135, max=43380, avg=41948.97, stdev=200.32 00:30:34.744 lat (usec): min=41143, max=43424, avg=41957.44, stdev=200.64 00:30:34.744 clat percentiles (usec): 00:30:34.744 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:30:34.744 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:34.744 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:34.744 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:30:34.744 | 99.99th=[43254] 00:30:34.744 bw ( KiB/s): min= 352, max= 384, per=99.69%, avg=380.80, stdev= 9.85, samples=20 00:30:34.744 iops : min= 88, max= 96, 
avg=95.20, stdev= 2.46, samples=20 00:30:34.744 lat (msec) : 50=100.00% 00:30:34.744 cpu : usr=94.93%, sys=4.80%, ctx=18, majf=0, minf=255 00:30:34.744 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:34.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.744 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.744 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:34.744 00:30:34.744 Run status group 0 (all jobs): 00:30:34.744 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3824KiB (3916kB), run=10032-10032msec 00:30:34.744 09:45:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:34.745 09:45:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:34.745 09:45:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:34.745 09:45:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:34.745 09:45:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:34.745 09:45:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:34.745 09:45:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.745 09:45:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:34.745 09:45:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.745 09:45:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:34.745 09:45:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.745 09:45:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:34.745 09:45:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.745 00:30:34.745 real 0m11.171s 00:30:34.745 user 0m19.021s 00:30:34.745 sys 0m0.899s 00:30:34.745 09:45:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:34.745 09:45:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:34.745 ************************************ 00:30:34.745 END TEST fio_dif_1_default 00:30:34.745 ************************************ 00:30:34.745 09:45:04 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:34.745 09:45:04 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:34.745 09:45:04 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:34.745 09:45:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:34.745 ************************************ 00:30:34.745 START TEST fio_dif_1_multi_subsystems 00:30:34.745 ************************************ 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # fio_dif_1_multi_subsystems 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:34.745 bdev_null0 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:34.745 [2024-06-11 09:45:05.066652] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:34.745 bdev_null1 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.745 09:45:05 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:34.745 { 00:30:34.745 "params": { 00:30:34.745 "name": "Nvme$subsystem", 00:30:34.745 "trtype": "$TEST_TRANSPORT", 00:30:34.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.745 "adrfam": "ipv4", 00:30:34.745 "trsvcid": "$NVMF_PORT", 00:30:34.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.745 "hdgst": ${hdgst:-false}, 00:30:34.745 "ddgst": ${ddgst:-false} 00:30:34.745 }, 00:30:34.745 "method": "bdev_nvme_attach_controller" 00:30:34.745 } 00:30:34.745 EOF 00:30:34.745 )") 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1340 -- # shift 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libasan 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:34.745 { 00:30:34.745 "params": { 00:30:34.745 "name": "Nvme$subsystem", 00:30:34.745 "trtype": "$TEST_TRANSPORT", 00:30:34.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.745 "adrfam": "ipv4", 00:30:34.745 "trsvcid": "$NVMF_PORT", 00:30:34.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.745 "hdgst": ${hdgst:-false}, 00:30:34.745 "ddgst": ${ddgst:-false} 00:30:34.745 }, 00:30:34.745 "method": "bdev_nvme_attach_controller" 00:30:34.745 } 00:30:34.745 EOF 00:30:34.745 )") 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:34.745 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
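[Editor's note] Everything fio_dif_1_multi_subsystems has set up so far reduces to a short RPC sequence: two 64 MB null bdevs with 512-byte blocks, 16 bytes of metadata and DIF type 1, each wrapped in its own subsystem with a TCP listener on 10.0.0.2:4420. rpc_cmd in the harness is a thin wrapper over scripts/rpc.py, so an equivalent by-hand replay against a running target would look roughly like this (NQNs, serial numbers and the listener address are taken verbatim from the trace):

  for sub in 0 1; do
      # null bdev: 64 MB, 512 B blocks + 16 B metadata, protection type 1
      ./scripts/rpc.py bdev_null_create bdev_null$sub 64 512 \
          --md-size 16 --dif-type 1
      ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$sub \
          --serial-number 53313233-$sub --allow-any-host
      ./scripts/rpc.py nvmf_subsystem_add_ns \
          nqn.2016-06.io.spdk:cnode$sub bdev_null$sub
      ./scripts/rpc.py nvmf_subsystem_add_listener \
          nqn.2016-06.io.spdk:cnode$sub -t tcp -a 10.0.0.2 -s 4420
  done

The transport itself was created earlier with -o --dif-insert-or-strip, which is the feature this suite exercises: the target inserts and strips protection information on behalf of a host that sends plain 512-byte blocks.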
00:30:34.746 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:34.746 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:34.746 "params": { 00:30:34.746 "name": "Nvme0", 00:30:34.746 "trtype": "tcp", 00:30:34.746 "traddr": "10.0.0.2", 00:30:34.746 "adrfam": "ipv4", 00:30:34.746 "trsvcid": "4420", 00:30:34.746 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:34.746 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:34.746 "hdgst": false, 00:30:34.746 "ddgst": false 00:30:34.746 }, 00:30:34.746 "method": "bdev_nvme_attach_controller" 00:30:34.746 },{ 00:30:34.746 "params": { 00:30:34.746 "name": "Nvme1", 00:30:34.746 "trtype": "tcp", 00:30:34.746 "traddr": "10.0.0.2", 00:30:34.746 "adrfam": "ipv4", 00:30:34.746 "trsvcid": "4420", 00:30:34.746 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:34.746 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:34.746 "hdgst": false, 00:30:34.746 "ddgst": false 00:30:34.746 }, 00:30:34.746 "method": "bdev_nvme_attach_controller" 00:30:34.746 }' 00:30:34.746 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:34.746 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:34.746 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:34.746 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:34.746 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:34.746 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:34.746 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:34.746 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:34.746 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:34.746 09:45:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.746 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:34.746 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:34.746 fio-3.35 00:30:34.746 Starting 2 threads 00:30:34.746 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.746 00:30:44.746 filename0: (groupid=0, jobs=1): err= 0: pid=1350304: Tue Jun 11 09:45:16 2024 00:30:44.746 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10042msec) 00:30:44.747 slat (nsec): min=8196, max=62937, avg=9588.02, stdev=4311.43 00:30:44.747 clat (usec): min=41812, max=43369, avg=41988.32, stdev=134.01 00:30:44.747 lat (usec): min=41820, max=43409, avg=41997.91, stdev=133.89 00:30:44.747 clat percentiles (usec): 00:30:44.747 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:30:44.747 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:44.747 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:44.747 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:30:44.747 | 99.99th=[43254] 
00:30:44.747 bw ( KiB/s): min= 352, max= 384, per=49.89%, avg=380.80, stdev= 9.85, samples=20 00:30:44.747 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:30:44.747 lat (msec) : 50=100.00% 00:30:44.747 cpu : usr=96.66%, sys=3.11%, ctx=12, majf=0, minf=148 00:30:44.747 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:44.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.747 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:44.747 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:44.747 filename1: (groupid=0, jobs=1): err= 0: pid=1350305: Tue Jun 11 09:45:16 2024 00:30:44.747 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10040msec) 00:30:44.747 slat (nsec): min=8192, max=60437, avg=9559.30, stdev=4237.05 00:30:44.747 clat (usec): min=41072, max=44177, avg=41978.96, stdev=184.65 00:30:44.747 lat (usec): min=41080, max=44213, avg=41988.52, stdev=184.79 00:30:44.747 clat percentiles (usec): 00:30:44.747 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:30:44.747 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:44.747 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:44.747 | 99.00th=[42206], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:30:44.747 | 99.99th=[44303] 00:30:44.747 bw ( KiB/s): min= 352, max= 384, per=49.89%, avg=380.80, stdev= 9.85, samples=20 00:30:44.747 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:30:44.747 lat (msec) : 50=100.00% 00:30:44.747 cpu : usr=96.82%, sys=2.94%, ctx=18, majf=0, minf=129 00:30:44.747 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:44.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:44.747 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:44.747 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:44.747 00:30:44.747 Run status group 0 (all jobs): 00:30:44.747 READ: bw=762KiB/s (780kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=7648KiB (7832kB), run=10040-10042msec 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:44.747 09:45:16 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:44.747 00:30:44.747 real 0m11.533s 00:30:44.747 user 0m36.639s 00:30:44.747 sys 0m0.989s 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:44.747 09:45:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:44.747 ************************************ 00:30:44.747 END TEST fio_dif_1_multi_subsystems 00:30:44.747 ************************************ 00:30:45.008 09:45:16 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:45.009 09:45:16 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:45.009 09:45:16 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:45.009 09:45:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:45.009 ************************************ 00:30:45.009 START TEST fio_dif_rand_params 00:30:45.009 ************************************ 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # fio_dif_rand_params 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:45.009 bdev_null0 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:45.009 [2024-06-11 09:45:16.681872] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.009 { 00:30:45.009 "params": { 00:30:45.009 "name": "Nvme$subsystem", 00:30:45.009 "trtype": "$TEST_TRANSPORT", 00:30:45.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.009 "adrfam": "ipv4", 00:30:45.009 "trsvcid": "$NVMF_PORT", 00:30:45.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.009 "hdgst": ${hdgst:-false}, 00:30:45.009 "ddgst": ${ddgst:-false} 00:30:45.009 }, 00:30:45.009 "method": 
"bdev_nvme_attach_controller" 00:30:45.009 } 00:30:45.009 EOF 00:30:45.009 )") 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:45.009 "params": { 00:30:45.009 "name": "Nvme0", 00:30:45.009 "trtype": "tcp", 00:30:45.009 "traddr": "10.0.0.2", 00:30:45.009 "adrfam": "ipv4", 00:30:45.009 "trsvcid": "4420", 00:30:45.009 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:45.009 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:45.009 "hdgst": false, 00:30:45.009 "ddgst": false 00:30:45.009 }, 00:30:45.009 "method": "bdev_nvme_attach_controller" 00:30:45.009 }' 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:45.009 09:45:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:45.580 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:45.580 ... 
00:30:45.580 fio-3.35 00:30:45.580 Starting 3 threads 00:30:45.580 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.871 00:30:50.871 filename0: (groupid=0, jobs=1): err= 0: pid=1353170: Tue Jun 11 09:45:22 2024 00:30:50.871 read: IOPS=186, BW=23.3MiB/s (24.4MB/s)(117MiB/5016msec) 00:30:50.871 slat (nsec): min=8223, max=43271, avg=8950.39, stdev=1663.72 00:30:50.871 clat (usec): min=4688, max=93433, avg=16096.54, stdev=16501.56 00:30:50.871 lat (usec): min=4697, max=93442, avg=16105.49, stdev=16501.49 00:30:50.871 clat percentiles (usec): 00:30:50.871 | 1.00th=[ 5211], 5.00th=[ 5800], 10.00th=[ 6259], 20.00th=[ 7373], 00:30:50.871 | 30.00th=[ 8094], 40.00th=[ 8848], 50.00th=[ 9896], 60.00th=[10552], 00:30:50.871 | 70.00th=[11338], 80.00th=[13304], 90.00th=[47973], 95.00th=[50594], 00:30:50.871 | 99.00th=[87557], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:30:50.871 | 99.99th=[93848] 00:30:50.871 bw ( KiB/s): min=17920, max=30720, per=34.98%, avg=23833.60, stdev=4039.52, samples=10 00:30:50.871 iops : min= 140, max= 240, avg=186.20, stdev=31.56, samples=10 00:30:50.871 lat (msec) : 10=53.00%, 20=30.84%, 50=10.81%, 100=5.35% 00:30:50.871 cpu : usr=96.85%, sys=2.89%, ctx=10, majf=0, minf=112 00:30:50.871 IO depths : 1=2.7%, 2=97.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:50.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.871 issued rwts: total=934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.871 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:50.871 filename0: (groupid=0, jobs=1): err= 0: pid=1353171: Tue Jun 11 09:45:22 2024 00:30:50.871 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(127MiB/5037msec) 00:30:50.871 slat (nsec): min=8231, max=32766, avg=9001.58, stdev=1099.02 00:30:50.871 clat (usec): min=5065, max=91308, avg=14813.78, stdev=14380.21 00:30:50.871 lat (usec): min=5073, max=91316, avg=14822.79, stdev=14380.28 00:30:50.871 clat percentiles (usec): 00:30:50.871 | 1.00th=[ 5538], 5.00th=[ 5932], 10.00th=[ 6259], 20.00th=[ 7308], 00:30:50.871 | 30.00th=[ 8094], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[10683], 00:30:50.871 | 70.00th=[11994], 80.00th=[13566], 90.00th=[47449], 95.00th=[50070], 00:30:50.871 | 99.00th=[53216], 99.50th=[57410], 99.90th=[90702], 99.95th=[91751], 00:30:50.871 | 99.99th=[91751] 00:30:50.871 bw ( KiB/s): min=17152, max=46336, per=38.18%, avg=26009.60, stdev=9408.52, samples=10 00:30:50.871 iops : min= 134, max= 362, avg=203.20, stdev=73.50, samples=10 00:30:50.871 lat (msec) : 10=53.68%, 20=33.27%, 50=8.54%, 100=4.51% 00:30:50.871 cpu : usr=96.86%, sys=2.86%, ctx=11, majf=0, minf=76 00:30:50.871 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:50.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.871 issued rwts: total=1019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.871 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:50.871 filename0: (groupid=0, jobs=1): err= 0: pid=1353172: Tue Jun 11 09:45:22 2024 00:30:50.871 read: IOPS=144, BW=18.1MiB/s (19.0MB/s)(91.0MiB/5022msec) 00:30:50.871 slat (nsec): min=8219, max=45322, avg=8938.52, stdev=1644.48 00:30:50.871 clat (usec): min=6391, max=91403, avg=20678.29, stdev=17685.82 00:30:50.871 lat (usec): min=6400, max=91412, avg=20687.23, stdev=17685.77 00:30:50.871 clat percentiles (usec): 
00:30:50.871 | 1.00th=[ 6915], 5.00th=[ 7570], 10.00th=[ 8094], 20.00th=[ 9241], 00:30:50.871 | 30.00th=[10683], 40.00th=[11600], 50.00th=[12649], 60.00th=[13960], 00:30:50.871 | 70.00th=[15795], 80.00th=[49021], 90.00th=[52167], 95.00th=[54264], 00:30:50.871 | 99.00th=[56886], 99.50th=[89654], 99.90th=[91751], 99.95th=[91751], 00:30:50.871 | 99.99th=[91751] 00:30:50.871 bw ( KiB/s): min=13056, max=26880, per=27.24%, avg=18560.00, stdev=4455.76, samples=10 00:30:50.871 iops : min= 102, max= 210, avg=145.00, stdev=34.81, samples=10 00:30:50.871 lat (msec) : 10=24.73%, 20=53.16%, 50=5.91%, 100=16.21% 00:30:50.871 cpu : usr=96.22%, sys=3.53%, ctx=10, majf=0, minf=74 00:30:50.871 IO depths : 1=5.1%, 2=94.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:50.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:50.871 issued rwts: total=728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:50.871 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:50.871 00:30:50.871 Run status group 0 (all jobs): 00:30:50.871 READ: bw=66.5MiB/s (69.8MB/s), 18.1MiB/s-25.3MiB/s (19.0MB/s-26.5MB/s), io=335MiB (351MB), run=5016-5037msec 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
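[Note: the run above is internally consistent: filename0's 186 IOPS at the 128 KiB block size works out to 186 x 128 KiB, roughly 23.3 MiB/s, matching its reported bandwidth, and the three jobs (23.3 + 25.3 + 18.1 MiB/s) account for the group's 66.5 MiB/s aggregate once the slightly different 5016-5037 ms runtimes are folded in. The trace now rebuilds the target for the next case (NULL_DIF=2, 4k blocks, 8 jobs, iodepth 16, 3 files): for each of subsystems 0, 1 and 2 it issues the RPC sequence sketched below, shown here for subsystem 0 (rpc_cmd is the autotest wrapper around SPDK's RPC client; the commands and arguments are verbatim from the trace):

    # One create_subsystem iteration, condensed: back the namespace with a
    # 64 MiB null bdev (512-byte blocks, 16 bytes of metadata, DIF type 2),
    # then expose it over NVMe/TCP on 10.0.0.2:4420.
    rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
]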
00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.133 bdev_null0 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.133 [2024-06-11 09:45:22.781812] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.133 bdev_null1 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.133 bdev_null2 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:51.133 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:51.134 { 00:30:51.134 "params": { 00:30:51.134 "name": "Nvme$subsystem", 00:30:51.134 "trtype": "$TEST_TRANSPORT", 00:30:51.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.134 "adrfam": "ipv4", 00:30:51.134 "trsvcid": "$NVMF_PORT", 00:30:51.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.134 "hdgst": ${hdgst:-false}, 00:30:51.134 "ddgst": ${ddgst:-false} 00:30:51.134 }, 00:30:51.134 "method": "bdev_nvme_attach_controller" 00:30:51.134 } 00:30:51.134 EOF 00:30:51.134 )") 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:51.134 { 00:30:51.134 "params": { 00:30:51.134 "name": "Nvme$subsystem", 00:30:51.134 "trtype": "$TEST_TRANSPORT", 00:30:51.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.134 "adrfam": "ipv4", 00:30:51.134 "trsvcid": "$NVMF_PORT", 00:30:51.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.134 "hdgst": ${hdgst:-false}, 00:30:51.134 "ddgst": ${ddgst:-false} 00:30:51.134 }, 00:30:51.134 "method": "bdev_nvme_attach_controller" 00:30:51.134 } 
00:30:51.134 EOF 00:30:51.134 )") 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:51.134 { 00:30:51.134 "params": { 00:30:51.134 "name": "Nvme$subsystem", 00:30:51.134 "trtype": "$TEST_TRANSPORT", 00:30:51.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.134 "adrfam": "ipv4", 00:30:51.134 "trsvcid": "$NVMF_PORT", 00:30:51.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.134 "hdgst": ${hdgst:-false}, 00:30:51.134 "ddgst": ${ddgst:-false} 00:30:51.134 }, 00:30:51.134 "method": "bdev_nvme_attach_controller" 00:30:51.134 } 00:30:51.134 EOF 00:30:51.134 )") 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:51.134 "params": { 00:30:51.134 "name": "Nvme0", 00:30:51.134 "trtype": "tcp", 00:30:51.134 "traddr": "10.0.0.2", 00:30:51.134 "adrfam": "ipv4", 00:30:51.134 "trsvcid": "4420", 00:30:51.134 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:51.134 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:51.134 "hdgst": false, 00:30:51.134 "ddgst": false 00:30:51.134 }, 00:30:51.134 "method": "bdev_nvme_attach_controller" 00:30:51.134 },{ 00:30:51.134 "params": { 00:30:51.134 "name": "Nvme1", 00:30:51.134 "trtype": "tcp", 00:30:51.134 "traddr": "10.0.0.2", 00:30:51.134 "adrfam": "ipv4", 00:30:51.134 "trsvcid": "4420", 00:30:51.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:51.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:51.134 "hdgst": false, 00:30:51.134 "ddgst": false 00:30:51.134 }, 00:30:51.134 "method": "bdev_nvme_attach_controller" 00:30:51.134 },{ 00:30:51.134 "params": { 00:30:51.134 "name": "Nvme2", 00:30:51.134 "trtype": "tcp", 00:30:51.134 "traddr": "10.0.0.2", 00:30:51.134 "adrfam": "ipv4", 00:30:51.134 "trsvcid": "4420", 00:30:51.134 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:51.134 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:51.134 "hdgst": false, 00:30:51.134 "ddgst": false 00:30:51.134 }, 00:30:51.134 "method": "bdev_nvme_attach_controller" 00:30:51.134 }' 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:51.134 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:51.418 09:45:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # asan_lib= 00:30:51.418 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:51.418 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:51.418 09:45:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:51.682 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:51.682 ... 00:30:51.682 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:51.682 ... 00:30:51.682 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:51.682 ... 00:30:51.682 fio-3.35 00:30:51.682 Starting 24 threads 00:30:51.682 EAL: No free 2048 kB hugepages reported on node 1 00:31:03.917 00:31:03.917 filename0: (groupid=0, jobs=1): err= 0: pid=1354442: Tue Jun 11 09:45:34 2024 00:31:03.917 read: IOPS=510, BW=2042KiB/s (2091kB/s)(20.0MiB/10028msec) 00:31:03.917 slat (nsec): min=8259, max=64957, avg=12309.45, stdev=6456.65 00:31:03.917 clat (usec): min=2586, max=39139, avg=31233.08, stdev=4527.92 00:31:03.917 lat (usec): min=2602, max=39148, avg=31245.39, stdev=4527.20 00:31:03.917 clat percentiles (usec): 00:31:03.917 | 1.00th=[ 3097], 5.00th=[30802], 10.00th=[31327], 20.00th=[31589], 00:31:03.917 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:03.917 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:03.917 | 99.00th=[33817], 99.50th=[34341], 99.90th=[39060], 99.95th=[39060], 00:31:03.917 | 99.99th=[39060] 00:31:03.917 bw ( KiB/s): min= 1920, max= 2944, per=4.27%, avg=2041.60, stdev=221.61, samples=20 00:31:03.917 iops : min= 480, max= 736, avg=510.40, stdev=55.40, samples=20 00:31:03.917 lat (msec) : 4=1.88%, 10=0.62%, 20=0.31%, 50=97.19% 00:31:03.917 cpu : usr=98.93%, sys=0.69%, ctx=71, majf=0, minf=61 00:31:03.917 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:03.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.917 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.917 issued rwts: total=5120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.917 filename0: (groupid=0, jobs=1): err= 0: pid=1354443: Tue Jun 11 09:45:34 2024 00:31:03.917 read: IOPS=502, BW=2009KiB/s (2057kB/s)(19.6MiB/10013msec) 00:31:03.917 slat (nsec): min=7502, max=89160, avg=13052.92, stdev=8912.84 00:31:03.917 clat (usec): min=15816, max=65191, avg=31760.32, stdev=3843.61 00:31:03.917 lat (usec): min=15826, max=65212, avg=31773.37, stdev=3843.59 00:31:03.917 clat percentiles (usec): 00:31:03.917 | 1.00th=[19792], 5.00th=[22938], 10.00th=[31065], 20.00th=[31589], 00:31:03.917 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:03.917 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32637], 95.00th=[36439], 00:31:03.917 | 99.00th=[44827], 99.50th=[48497], 99.90th=[56361], 99.95th=[56361], 00:31:03.917 | 99.99th=[65274] 00:31:03.917 bw ( KiB/s): min= 1795, max= 2160, per=4.19%, avg=2002.68, stdev=97.30, samples=19 00:31:03.917 iops : min= 448, max= 540, avg=500.63, stdev=24.41, samples=19 00:31:03.917 lat (msec) : 20=1.21%, 
50=98.47%, 100=0.32% 00:31:03.917 cpu : usr=99.18%, sys=0.52%, ctx=17, majf=0, minf=31 00:31:03.917 IO depths : 1=4.0%, 2=9.6%, 4=22.7%, 8=55.1%, 16=8.6%, 32=0.0%, >=64=0.0% 00:31:03.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.917 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.917 issued rwts: total=5028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.917 filename0: (groupid=0, jobs=1): err= 0: pid=1354444: Tue Jun 11 09:45:34 2024 00:31:03.917 read: IOPS=499, BW=1997KiB/s (2045kB/s)(19.5MiB/10001msec) 00:31:03.917 slat (usec): min=8, max=103, avg=20.44, stdev=11.29 00:31:03.917 clat (usec): min=17820, max=39086, avg=31870.86, stdev=988.86 00:31:03.917 lat (usec): min=17830, max=39129, avg=31891.30, stdev=988.82 00:31:03.917 clat percentiles (usec): 00:31:03.917 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31327], 20.00th=[31589], 00:31:03.917 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:03.917 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:03.917 | 99.00th=[33817], 99.50th=[34341], 99.90th=[39060], 99.95th=[39060], 00:31:03.917 | 99.99th=[39060] 00:31:03.917 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1994.11, stdev=64.93, samples=19 00:31:03.917 iops : min= 480, max= 512, avg=498.53, stdev=16.23, samples=19 00:31:03.917 lat (msec) : 20=0.32%, 50=99.68% 00:31:03.917 cpu : usr=97.31%, sys=1.46%, ctx=194, majf=0, minf=31 00:31:03.917 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:03.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.917 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.917 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.917 filename0: (groupid=0, jobs=1): err= 0: pid=1354445: Tue Jun 11 09:45:34 2024 00:31:03.917 read: IOPS=499, BW=1997KiB/s (2045kB/s)(19.5MiB/10012msec) 00:31:03.917 slat (nsec): min=5847, max=86949, avg=24496.27, stdev=14754.94 00:31:03.917 clat (usec): min=11478, max=59900, avg=31842.85, stdev=2391.87 00:31:03.917 lat (usec): min=11487, max=59916, avg=31867.35, stdev=2392.49 00:31:03.917 clat percentiles (usec): 00:31:03.917 | 1.00th=[21890], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:31:03.917 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:03.917 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:03.917 | 99.00th=[40633], 99.50th=[48497], 99.90th=[50594], 99.95th=[50594], 00:31:03.918 | 99.99th=[60031] 00:31:03.918 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1989.89, stdev=63.82, samples=19 00:31:03.918 iops : min= 480, max= 512, avg=497.47, stdev=15.96, samples=19 00:31:03.918 lat (msec) : 20=0.64%, 50=99.04%, 100=0.32% 00:31:03.918 cpu : usr=97.63%, sys=1.23%, ctx=172, majf=0, minf=37 00:31:03.918 IO depths : 1=5.6%, 2=11.8%, 4=24.6%, 8=51.1%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:03.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.918 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.918 issued rwts: total=4998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.918 filename0: (groupid=0, jobs=1): err= 0: pid=1354446: Tue Jun 11 09:45:34 2024 00:31:03.918 read: 
IOPS=498, BW=1993KiB/s (2041kB/s)(19.5MiB/10010msec) 00:31:03.918 slat (nsec): min=6022, max=92089, avg=22112.88, stdev=16302.42 00:31:03.918 clat (usec): min=10395, max=58637, avg=31991.98, stdev=2271.74 00:31:03.918 lat (usec): min=10404, max=58655, avg=32014.10, stdev=2271.16 00:31:03.918 clat percentiles (usec): 00:31:03.918 | 1.00th=[25035], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:03.918 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:03.918 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:03.918 | 99.00th=[38536], 99.50th=[43254], 99.90th=[58459], 99.95th=[58459], 00:31:03.918 | 99.99th=[58459] 00:31:03.918 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1983.16, stdev=58.78, samples=19 00:31:03.918 iops : min= 448, max= 512, avg=495.79, stdev=14.70, samples=19 00:31:03.918 lat (msec) : 20=0.48%, 50=99.20%, 100=0.32% 00:31:03.918 cpu : usr=98.05%, sys=1.17%, ctx=52, majf=0, minf=29 00:31:03.918 IO depths : 1=1.5%, 2=3.1%, 4=6.7%, 8=73.1%, 16=15.5%, 32=0.0%, >=64=0.0% 00:31:03.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.918 complete : 0=0.0%, 4=90.6%, 8=7.9%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.918 issued rwts: total=4988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.918 filename0: (groupid=0, jobs=1): err= 0: pid=1354448: Tue Jun 11 09:45:34 2024 00:31:03.918 read: IOPS=498, BW=1993KiB/s (2041kB/s)(19.5MiB/10020msec) 00:31:03.918 slat (nsec): min=7116, max=84639, avg=20165.16, stdev=13096.52 00:31:03.918 clat (usec): min=18359, max=84787, avg=31938.18, stdev=2766.77 00:31:03.918 lat (usec): min=18371, max=84807, avg=31958.34, stdev=2766.35 00:31:03.918 clat percentiles (usec): 00:31:03.918 | 1.00th=[21890], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:31:03.918 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:03.918 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:03.918 | 99.00th=[41681], 99.50th=[45876], 99.90th=[65799], 99.95th=[65799], 00:31:03.918 | 99.99th=[84411] 00:31:03.918 bw ( KiB/s): min= 1795, max= 2176, per=4.16%, avg=1988.50, stdev=86.55, samples=20 00:31:03.918 iops : min= 448, max= 544, avg=497.05, stdev=21.72, samples=20 00:31:03.918 lat (msec) : 20=0.30%, 50=99.38%, 100=0.32% 00:31:03.918 cpu : usr=95.76%, sys=2.25%, ctx=104, majf=0, minf=24 00:31:03.918 IO depths : 1=5.8%, 2=11.9%, 4=24.5%, 8=51.1%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:03.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.918 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.918 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.918 filename0: (groupid=0, jobs=1): err= 0: pid=1354449: Tue Jun 11 09:45:34 2024 00:31:03.918 read: IOPS=498, BW=1994KiB/s (2041kB/s)(19.5MiB/10016msec) 00:31:03.918 slat (nsec): min=8243, max=88461, avg=22243.62, stdev=13553.48 00:31:03.918 clat (usec): min=15584, max=52156, avg=31900.91, stdev=1596.42 00:31:03.918 lat (usec): min=15625, max=52188, avg=31923.16, stdev=1595.74 00:31:03.918 clat percentiles (usec): 00:31:03.918 | 1.00th=[30540], 5.00th=[31065], 10.00th=[31589], 20.00th=[31589], 00:31:03.918 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:03.918 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:03.918 
| 99.00th=[34341], 99.50th=[36963], 99.90th=[52167], 99.95th=[52167], 00:31:03.918 | 99.99th=[52167] 00:31:03.918 bw ( KiB/s): min= 1795, max= 2052, per=4.17%, avg=1991.25, stdev=77.05, samples=20 00:31:03.918 iops : min= 448, max= 513, avg=497.70, stdev=19.43, samples=20 00:31:03.918 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:31:03.918 cpu : usr=98.10%, sys=1.00%, ctx=34, majf=0, minf=33 00:31:03.918 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:03.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.918 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.918 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.918 filename0: (groupid=0, jobs=1): err= 0: pid=1354450: Tue Jun 11 09:45:34 2024 00:31:03.918 read: IOPS=498, BW=1995KiB/s (2043kB/s)(19.5MiB/10009msec) 00:31:03.918 slat (nsec): min=5834, max=94471, avg=26636.97, stdev=15196.96 00:31:03.918 clat (usec): min=11052, max=57829, avg=31838.09, stdev=2074.52 00:31:03.918 lat (usec): min=11070, max=57845, avg=31864.72, stdev=2074.70 00:31:03.918 clat percentiles (usec): 00:31:03.918 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:03.918 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:03.918 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:03.918 | 99.00th=[33817], 99.50th=[38536], 99.90th=[57934], 99.95th=[57934], 00:31:03.918 | 99.99th=[57934] 00:31:03.918 bw ( KiB/s): min= 1795, max= 2048, per=4.16%, avg=1987.53, stdev=77.89, samples=19 00:31:03.918 iops : min= 448, max= 512, avg=496.84, stdev=19.58, samples=19 00:31:03.918 lat (msec) : 20=0.64%, 50=99.04%, 100=0.32% 00:31:03.918 cpu : usr=99.11%, sys=0.58%, ctx=16, majf=0, minf=36 00:31:03.918 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:03.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.918 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.918 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.918 filename1: (groupid=0, jobs=1): err= 0: pid=1354451: Tue Jun 11 09:45:34 2024 00:31:03.918 read: IOPS=498, BW=1995KiB/s (2042kB/s)(19.5MiB/10011msec) 00:31:03.918 slat (nsec): min=5840, max=90921, avg=27235.54, stdev=16192.64 00:31:03.918 clat (usec): min=11063, max=59338, avg=31814.39, stdev=2136.33 00:31:03.918 lat (usec): min=11073, max=59357, avg=31841.63, stdev=2136.83 00:31:03.918 clat percentiles (usec): 00:31:03.918 | 1.00th=[30278], 5.00th=[31327], 10.00th=[31327], 20.00th=[31589], 00:31:03.918 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:03.918 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32375], 00:31:03.918 | 99.00th=[33817], 99.50th=[38536], 99.90th=[59507], 99.95th=[59507], 00:31:03.918 | 99.99th=[59507] 00:31:03.918 bw ( KiB/s): min= 1795, max= 2048, per=4.16%, avg=1987.53, stdev=77.89, samples=19 00:31:03.918 iops : min= 448, max= 512, avg=496.84, stdev=19.58, samples=19 00:31:03.918 lat (msec) : 20=0.64%, 50=99.04%, 100=0.32% 00:31:03.918 cpu : usr=99.17%, sys=0.51%, ctx=32, majf=0, minf=34 00:31:03.918 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:03.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:03.918 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.918 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.918 filename1: (groupid=0, jobs=1): err= 0: pid=1354452: Tue Jun 11 09:45:34 2024 00:31:03.918 read: IOPS=503, BW=2014KiB/s (2063kB/s)(19.7MiB/10012msec) 00:31:03.918 slat (usec): min=8, max=151, avg=18.52, stdev=13.12 00:31:03.918 clat (usec): min=16448, max=49638, avg=31612.72, stdev=2211.89 00:31:03.918 lat (usec): min=16458, max=49675, avg=31631.24, stdev=2213.30 00:31:03.918 clat percentiles (usec): 00:31:03.918 | 1.00th=[21103], 5.00th=[30540], 10.00th=[31327], 20.00th=[31589], 00:31:03.918 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:03.918 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:03.918 | 99.00th=[37487], 99.50th=[38536], 99.90th=[49546], 99.95th=[49546], 00:31:03.918 | 99.99th=[49546] 00:31:03.918 bw ( KiB/s): min= 1920, max= 2272, per=4.21%, avg=2010.40, stdev=90.40, samples=20 00:31:03.918 iops : min= 480, max= 568, avg=502.60, stdev=22.60, samples=20 00:31:03.918 lat (msec) : 20=0.36%, 50=99.64% 00:31:03.918 cpu : usr=99.05%, sys=0.62%, ctx=29, majf=0, minf=50 00:31:03.918 IO depths : 1=5.9%, 2=11.9%, 4=24.1%, 8=51.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:03.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.918 complete : 0=0.0%, 4=93.8%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.918 issued rwts: total=5042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.918 filename1: (groupid=0, jobs=1): err= 0: pid=1354453: Tue Jun 11 09:45:34 2024 00:31:03.918 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10016msec) 00:31:03.918 slat (nsec): min=7006, max=40662, avg=9922.35, stdev=3028.41 00:31:03.918 clat (usec): min=13884, max=37215, avg=31910.47, stdev=1270.99 00:31:03.918 lat (usec): min=13893, max=37224, avg=31920.39, stdev=1270.58 00:31:03.918 clat percentiles (usec): 00:31:03.918 | 1.00th=[28443], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:03.918 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:03.918 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:03.918 | 99.00th=[33424], 99.50th=[34866], 99.90th=[36963], 99.95th=[36963], 00:31:03.918 | 99.99th=[36963] 00:31:03.918 bw ( KiB/s): min= 1920, max= 2048, per=4.18%, avg=1996.80, stdev=64.34, samples=20 00:31:03.918 iops : min= 480, max= 512, avg=499.20, stdev=16.08, samples=20 00:31:03.919 lat (msec) : 20=0.36%, 50=99.64% 00:31:03.919 cpu : usr=99.02%, sys=0.64%, ctx=53, majf=0, minf=44 00:31:03.919 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:03.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.919 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.919 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.919 filename1: (groupid=0, jobs=1): err= 0: pid=1354454: Tue Jun 11 09:45:34 2024 00:31:03.919 read: IOPS=498, BW=1995KiB/s (2043kB/s)(19.5MiB/10010msec) 00:31:03.919 slat (nsec): min=8190, max=78214, avg=19181.71, stdev=11942.15 00:31:03.919 clat (usec): min=16096, max=46209, avg=31924.57, stdev=1357.82 00:31:03.919 lat (usec): min=16107, max=46231, avg=31943.75, 
stdev=1356.64 00:31:03.919 clat percentiles (usec): 00:31:03.919 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:03.919 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:03.919 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:03.919 | 99.00th=[34341], 99.50th=[36963], 99.90th=[46400], 99.95th=[46400], 00:31:03.919 | 99.99th=[46400] 00:31:03.919 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1987.37, stdev=65.66, samples=19 00:31:03.919 iops : min= 480, max= 512, avg=496.84, stdev=16.42, samples=19 00:31:03.919 lat (msec) : 20=0.32%, 50=99.68% 00:31:03.919 cpu : usr=99.04%, sys=0.64%, ctx=19, majf=0, minf=30 00:31:03.919 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:03.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.919 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.919 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.919 filename1: (groupid=0, jobs=1): err= 0: pid=1354455: Tue Jun 11 09:45:34 2024 00:31:03.919 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.3MiB/10029msec) 00:31:03.919 slat (nsec): min=5716, max=69670, avg=13860.24, stdev=8772.76 00:31:03.919 clat (usec): min=15592, max=50377, avg=32358.88, stdev=4241.90 00:31:03.919 lat (usec): min=15603, max=50398, avg=32372.74, stdev=4242.29 00:31:03.919 clat percentiles (usec): 00:31:03.919 | 1.00th=[21103], 5.00th=[25560], 10.00th=[31065], 20.00th=[31851], 00:31:03.919 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:03.919 | 70.00th=[32113], 80.00th=[32375], 90.00th=[36963], 95.00th=[41681], 00:31:03.919 | 99.00th=[47973], 99.50th=[48497], 99.90th=[48497], 99.95th=[50070], 00:31:03.919 | 99.99th=[50594] 00:31:03.919 bw ( KiB/s): min= 1816, max= 2104, per=4.12%, avg=1971.20, stdev=71.05, samples=20 00:31:03.919 iops : min= 454, max= 526, avg=492.80, stdev=17.76, samples=20 00:31:03.919 lat (msec) : 20=0.45%, 50=99.47%, 100=0.08% 00:31:03.919 cpu : usr=98.73%, sys=0.94%, ctx=51, majf=0, minf=38 00:31:03.919 IO depths : 1=1.6%, 2=3.4%, 4=11.0%, 8=71.7%, 16=12.3%, 32=0.0%, >=64=0.0% 00:31:03.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.919 complete : 0=0.0%, 4=90.7%, 8=5.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.919 issued rwts: total=4942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.919 filename1: (groupid=0, jobs=1): err= 0: pid=1354456: Tue Jun 11 09:45:34 2024 00:31:03.919 read: IOPS=510, BW=2042KiB/s (2092kB/s)(20.0MiB/10027msec) 00:31:03.919 slat (nsec): min=8246, max=54648, avg=11660.61, stdev=5349.50 00:31:03.919 clat (usec): min=2552, max=39084, avg=31230.40, stdev=4479.72 00:31:03.919 lat (usec): min=2569, max=39094, avg=31242.06, stdev=4478.74 00:31:03.919 clat percentiles (usec): 00:31:03.919 | 1.00th=[ 3032], 5.00th=[31065], 10.00th=[31327], 20.00th=[31851], 00:31:03.919 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:03.919 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:03.919 | 99.00th=[33817], 99.50th=[34341], 99.90th=[39060], 99.95th=[39060], 00:31:03.919 | 99.99th=[39060] 00:31:03.919 bw ( KiB/s): min= 1920, max= 2821, per=4.27%, avg=2041.85, stdev=193.51, samples=20 00:31:03.919 iops : min= 480, max= 705, avg=510.45, stdev=48.32, samples=20 
00:31:03.919 lat (msec) : 4=1.56%, 10=0.86%, 20=0.08%, 50=97.50% 00:31:03.919 cpu : usr=98.78%, sys=0.83%, ctx=57, majf=0, minf=38 00:31:03.919 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:03.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.919 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.919 issued rwts: total=5120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.919 filename1: (groupid=0, jobs=1): err= 0: pid=1354457: Tue Jun 11 09:45:34 2024 00:31:03.919 read: IOPS=506, BW=2026KiB/s (2075kB/s)(19.8MiB/10010msec) 00:31:03.919 slat (nsec): min=6870, max=91244, avg=22147.12, stdev=14822.98 00:31:03.919 clat (usec): min=17242, max=61504, avg=31412.33, stdev=3579.56 00:31:03.919 lat (usec): min=17251, max=61523, avg=31434.48, stdev=3581.58 00:31:03.919 clat percentiles (usec): 00:31:03.919 | 1.00th=[20579], 5.00th=[22938], 10.00th=[27132], 20.00th=[31327], 00:31:03.919 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:03.919 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32637], 95.00th=[36963], 00:31:03.919 | 99.00th=[42730], 99.50th=[45876], 99.90th=[47973], 99.95th=[61604], 00:31:03.919 | 99.99th=[61604] 00:31:03.919 bw ( KiB/s): min= 1899, max= 2240, per=4.24%, avg=2027.11, stdev=91.41, samples=19 00:31:03.919 iops : min= 474, max= 560, avg=506.74, stdev=22.91, samples=19 00:31:03.919 lat (msec) : 20=0.67%, 50=99.27%, 100=0.06% 00:31:03.919 cpu : usr=98.89%, sys=0.76%, ctx=62, majf=0, minf=40 00:31:03.919 IO depths : 1=3.5%, 2=7.3%, 4=17.9%, 8=61.3%, 16=10.1%, 32=0.0%, >=64=0.0% 00:31:03.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.919 complete : 0=0.0%, 4=92.5%, 8=2.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.919 issued rwts: total=5070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.919 filename1: (groupid=0, jobs=1): err= 0: pid=1354458: Tue Jun 11 09:45:34 2024 00:31:03.919 read: IOPS=498, BW=1995KiB/s (2043kB/s)(19.5MiB/10008msec) 00:31:03.919 slat (nsec): min=6141, max=87546, avg=25703.48, stdev=13720.52 00:31:03.919 clat (usec): min=10598, max=56384, avg=31838.73, stdev=2013.40 00:31:03.919 lat (usec): min=10607, max=56401, avg=31864.43, stdev=2013.71 00:31:03.919 clat percentiles (usec): 00:31:03.919 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:03.919 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:03.919 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:03.919 | 99.00th=[33817], 99.50th=[38536], 99.90th=[56361], 99.95th=[56361], 00:31:03.919 | 99.99th=[56361] 00:31:03.919 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1987.37, stdev=78.31, samples=19 00:31:03.919 iops : min= 448, max= 512, avg=496.84, stdev=19.58, samples=19 00:31:03.919 lat (msec) : 20=0.60%, 50=99.08%, 100=0.32% 00:31:03.919 cpu : usr=98.81%, sys=0.74%, ctx=35, majf=0, minf=31 00:31:03.919 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:03.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.919 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.919 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.919 filename2: (groupid=0, 
jobs=1): err= 0: pid=1354459: Tue Jun 11 09:45:34 2024 00:31:03.919 read: IOPS=498, BW=1993KiB/s (2040kB/s)(19.5MiB/10021msec) 00:31:03.919 slat (nsec): min=5886, max=73389, avg=23853.24, stdev=12973.64 00:31:03.919 clat (usec): min=21724, max=48216, avg=31896.53, stdev=1242.19 00:31:03.919 lat (usec): min=21733, max=48232, avg=31920.38, stdev=1241.67 00:31:03.919 clat percentiles (usec): 00:31:03.919 | 1.00th=[30540], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:31:03.919 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:03.919 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:03.919 | 99.00th=[33817], 99.50th=[39060], 99.90th=[47973], 99.95th=[47973], 00:31:03.919 | 99.99th=[47973] 00:31:03.919 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1988.35, stdev=64.06, samples=20 00:31:03.919 iops : min= 480, max= 512, avg=497.05, stdev=16.00, samples=20 00:31:03.919 lat (msec) : 50=100.00% 00:31:03.919 cpu : usr=99.11%, sys=0.57%, ctx=18, majf=0, minf=35 00:31:03.919 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:03.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.919 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.919 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.919 filename2: (groupid=0, jobs=1): err= 0: pid=1354461: Tue Jun 11 09:45:34 2024 00:31:03.919 read: IOPS=498, BW=1995KiB/s (2043kB/s)(19.5MiB/10010msec) 00:31:03.919 slat (nsec): min=7748, max=97764, avg=23753.54, stdev=17195.55 00:31:03.919 clat (usec): min=10398, max=58498, avg=31889.17, stdev=2134.21 00:31:03.919 lat (usec): min=10407, max=58520, avg=31912.92, stdev=2134.19 00:31:03.919 clat percentiles (usec): 00:31:03.919 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:03.919 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:03.919 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:03.920 | 99.00th=[34341], 99.50th=[38536], 99.90th=[58459], 99.95th=[58459], 00:31:03.920 | 99.99th=[58459] 00:31:03.920 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1987.37, stdev=78.31, samples=19 00:31:03.920 iops : min= 448, max= 512, avg=496.84, stdev=19.58, samples=19 00:31:03.920 lat (msec) : 20=0.64%, 50=99.04%, 100=0.32% 00:31:03.920 cpu : usr=99.15%, sys=0.53%, ctx=14, majf=0, minf=28 00:31:03.920 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:03.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.920 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.920 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.920 filename2: (groupid=0, jobs=1): err= 0: pid=1354462: Tue Jun 11 09:45:34 2024 00:31:03.920 read: IOPS=495, BW=1983KiB/s (2031kB/s)(19.5MiB/10048msec) 00:31:03.920 slat (nsec): min=5961, max=73101, avg=16640.45, stdev=10723.22 00:31:03.920 clat (usec): min=16150, max=56652, avg=32153.48, stdev=2173.84 00:31:03.920 lat (usec): min=16159, max=56671, avg=32170.12, stdev=2173.46 00:31:03.920 clat percentiles (usec): 00:31:03.920 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:03.920 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:03.920 | 70.00th=[32113], 
80.00th=[32113], 90.00th=[32637], 95.00th=[32900], 00:31:03.920 | 99.00th=[41157], 99.50th=[50594], 99.90th=[56886], 99.95th=[56886], 00:31:03.920 | 99.99th=[56886] 00:31:03.920 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1980.63, stdev=53.75, samples=19 00:31:03.920 iops : min= 448, max= 512, avg=495.16, stdev=13.44, samples=19 00:31:03.920 lat (msec) : 20=0.24%, 50=99.24%, 100=0.52% 00:31:03.920 cpu : usr=99.14%, sys=0.53%, ctx=62, majf=0, minf=39 00:31:03.920 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=79.3%, 16=17.9%, 32=0.0%, >=64=0.0% 00:31:03.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.920 complete : 0=0.0%, 4=89.7%, 8=9.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.920 issued rwts: total=4982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.920 filename2: (groupid=0, jobs=1): err= 0: pid=1354463: Tue Jun 11 09:45:34 2024 00:31:03.920 read: IOPS=483, BW=1934KiB/s (1980kB/s)(18.9MiB/10008msec) 00:31:03.920 slat (nsec): min=6344, max=91148, avg=21569.61, stdev=14388.64 00:31:03.920 clat (usec): min=10379, max=56503, avg=32902.09, stdev=4106.76 00:31:03.920 lat (usec): min=10388, max=56519, avg=32923.66, stdev=4106.37 00:31:03.920 clat percentiles (usec): 00:31:03.920 | 1.00th=[22938], 5.00th=[30802], 10.00th=[31589], 20.00th=[31589], 00:31:03.920 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:03.920 | 70.00th=[32113], 80.00th=[32375], 90.00th=[40633], 95.00th=[42206], 00:31:03.920 | 99.00th=[44303], 99.50th=[45351], 99.90th=[56361], 99.95th=[56361], 00:31:03.920 | 99.99th=[56361] 00:31:03.920 bw ( KiB/s): min= 1536, max= 2096, per=4.01%, avg=1918.32, stdev=159.01, samples=19 00:31:03.920 iops : min= 384, max= 524, avg=479.58, stdev=39.75, samples=19 00:31:03.920 lat (msec) : 20=0.50%, 50=99.17%, 100=0.33% 00:31:03.920 cpu : usr=99.07%, sys=0.62%, ctx=39, majf=0, minf=44 00:31:03.920 IO depths : 1=4.5%, 2=9.1%, 4=21.1%, 8=56.7%, 16=8.6%, 32=0.0%, >=64=0.0% 00:31:03.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.920 complete : 0=0.0%, 4=93.4%, 8=1.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.920 issued rwts: total=4838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.920 filename2: (groupid=0, jobs=1): err= 0: pid=1354464: Tue Jun 11 09:45:34 2024 00:31:03.920 read: IOPS=498, BW=1993KiB/s (2040kB/s)(19.5MiB/10021msec) 00:31:03.920 slat (nsec): min=4740, max=80801, avg=13273.92, stdev=8392.88 00:31:03.920 clat (usec): min=19184, max=49794, avg=31987.71, stdev=1672.17 00:31:03.920 lat (usec): min=19193, max=49803, avg=32000.98, stdev=1671.47 00:31:03.920 clat percentiles (usec): 00:31:03.920 | 1.00th=[25035], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:03.920 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:03.920 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:03.920 | 99.00th=[36963], 99.50th=[44303], 99.90th=[44827], 99.95th=[48497], 00:31:03.920 | 99.99th=[49546] 00:31:03.920 bw ( KiB/s): min= 1920, max= 2117, per=4.17%, avg=1993.85, stdev=68.82, samples=20 00:31:03.920 iops : min= 480, max= 529, avg=498.45, stdev=17.18, samples=20 00:31:03.920 lat (msec) : 20=0.40%, 50=99.60% 00:31:03.920 cpu : usr=98.94%, sys=0.76%, ctx=16, majf=0, minf=38 00:31:03.920 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:03.920 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.920 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.920 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.920 filename2: (groupid=0, jobs=1): err= 0: pid=1354465: Tue Jun 11 09:45:34 2024 00:31:03.920 read: IOPS=498, BW=1994KiB/s (2041kB/s)(19.5MiB/10016msec) 00:31:03.920 slat (nsec): min=8260, max=86763, avg=22411.75, stdev=13883.12 00:31:03.920 clat (usec): min=16114, max=52098, avg=31888.00, stdev=1578.60 00:31:03.920 lat (usec): min=16163, max=52126, avg=31910.42, stdev=1577.87 00:31:03.920 clat percentiles (usec): 00:31:03.920 | 1.00th=[30540], 5.00th=[31065], 10.00th=[31589], 20.00th=[31589], 00:31:03.920 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:03.920 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:03.920 | 99.00th=[34341], 99.50th=[36963], 99.90th=[52167], 99.95th=[52167], 00:31:03.920 | 99.99th=[52167] 00:31:03.920 bw ( KiB/s): min= 1795, max= 2052, per=4.17%, avg=1991.25, stdev=77.05, samples=20 00:31:03.920 iops : min= 448, max= 513, avg=497.70, stdev=19.43, samples=20 00:31:03.920 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:31:03.920 cpu : usr=98.27%, sys=0.98%, ctx=38, majf=0, minf=28 00:31:03.920 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:03.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.920 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.920 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.920 filename2: (groupid=0, jobs=1): err= 0: pid=1354466: Tue Jun 11 09:45:34 2024 00:31:03.920 read: IOPS=499, BW=1997KiB/s (2045kB/s)(19.5MiB/10001msec) 00:31:03.920 slat (nsec): min=5062, max=76760, avg=24236.01, stdev=13661.30 00:31:03.920 clat (usec): min=18089, max=39115, avg=31840.80, stdev=980.86 00:31:03.920 lat (usec): min=18119, max=39127, avg=31865.04, stdev=980.84 00:31:03.920 clat percentiles (usec): 00:31:03.920 | 1.00th=[30540], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:31:03.920 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:03.920 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:03.920 | 99.00th=[33817], 99.50th=[34341], 99.90th=[39060], 99.95th=[39060], 00:31:03.920 | 99.99th=[39060] 00:31:03.920 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1994.11, stdev=64.93, samples=19 00:31:03.920 iops : min= 480, max= 512, avg=498.53, stdev=16.23, samples=19 00:31:03.920 lat (msec) : 20=0.32%, 50=99.68% 00:31:03.920 cpu : usr=99.20%, sys=0.50%, ctx=12, majf=0, minf=42 00:31:03.920 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:03.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.920 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.920 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.920 filename2: (groupid=0, jobs=1): err= 0: pid=1354467: Tue Jun 11 09:45:34 2024 00:31:03.920 read: IOPS=499, BW=1997KiB/s (2045kB/s)(19.5MiB/10001msec) 00:31:03.920 slat (nsec): min=8218, max=76856, avg=19841.24, stdev=13608.08 00:31:03.920 clat (usec): min=13115, max=39043, avg=31900.01, stdev=1025.37 
00:31:03.920 lat (usec): min=13124, max=39056, avg=31919.85, stdev=1024.29 00:31:03.920 clat percentiles (usec): 00:31:03.920 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:03.920 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:03.920 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:31:03.920 | 99.00th=[33817], 99.50th=[34341], 99.90th=[39060], 99.95th=[39060], 00:31:03.920 | 99.99th=[39060] 00:31:03.920 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1994.11, stdev=64.93, samples=19 00:31:03.920 iops : min= 480, max= 512, avg=498.53, stdev=16.23, samples=19 00:31:03.920 lat (msec) : 20=0.32%, 50=99.68% 00:31:03.920 cpu : usr=99.19%, sys=0.51%, ctx=11, majf=0, minf=36 00:31:03.920 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:03.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.920 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.920 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:03.920 00:31:03.920 Run status group 0 (all jobs): 00:31:03.920 READ: bw=46.7MiB/s (48.9MB/s), 1934KiB/s-2042KiB/s (1980kB/s-2092kB/s), io=469MiB (492MB), run=10001-10048msec 00:31:03.920 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:03.920 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:03.920 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:03.920 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:03.920 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:03.920 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:03.920 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.921 bdev_null0 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.921 [2024-06-11 09:45:34.407276] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.921 bdev_null1 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 
-- # local subsystem config 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:03.921 { 00:31:03.921 "params": { 00:31:03.921 "name": "Nvme$subsystem", 00:31:03.921 "trtype": "$TEST_TRANSPORT", 00:31:03.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:03.921 "adrfam": "ipv4", 00:31:03.921 "trsvcid": "$NVMF_PORT", 00:31:03.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:03.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:03.921 "hdgst": ${hdgst:-false}, 00:31:03.921 "ddgst": ${ddgst:-false} 00:31:03.921 }, 00:31:03.921 "method": "bdev_nvme_attach_controller" 00:31:03.921 } 00:31:03.921 EOF 00:31:03.921 )") 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:03.921 09:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:03.922 09:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:03.922 { 00:31:03.922 "params": { 00:31:03.922 "name": "Nvme$subsystem", 00:31:03.922 "trtype": "$TEST_TRANSPORT", 00:31:03.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:03.922 "adrfam": "ipv4", 00:31:03.922 "trsvcid": "$NVMF_PORT", 00:31:03.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:03.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:03.922 "hdgst": ${hdgst:-false}, 00:31:03.922 "ddgst": ${ddgst:-false} 00:31:03.922 }, 00:31:03.922 "method": 
"bdev_nvme_attach_controller" 00:31:03.922 } 00:31:03.922 EOF 00:31:03.922 )") 00:31:03.922 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:03.922 09:45:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:03.922 09:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:03.922 09:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:03.922 09:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:03.922 09:45:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:03.922 "params": { 00:31:03.922 "name": "Nvme0", 00:31:03.922 "trtype": "tcp", 00:31:03.922 "traddr": "10.0.0.2", 00:31:03.922 "adrfam": "ipv4", 00:31:03.922 "trsvcid": "4420", 00:31:03.922 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:03.922 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:03.922 "hdgst": false, 00:31:03.922 "ddgst": false 00:31:03.922 }, 00:31:03.922 "method": "bdev_nvme_attach_controller" 00:31:03.922 },{ 00:31:03.922 "params": { 00:31:03.922 "name": "Nvme1", 00:31:03.922 "trtype": "tcp", 00:31:03.922 "traddr": "10.0.0.2", 00:31:03.922 "adrfam": "ipv4", 00:31:03.922 "trsvcid": "4420", 00:31:03.922 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:03.922 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:03.922 "hdgst": false, 00:31:03.922 "ddgst": false 00:31:03.922 }, 00:31:03.922 "method": "bdev_nvme_attach_controller" 00:31:03.922 }' 00:31:03.922 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:03.922 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:03.922 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.922 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.922 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:31:03.922 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:03.922 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:03.922 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:03.922 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:03.922 09:45:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.922 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:03.922 ... 00:31:03.922 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:03.922 ... 
00:31:03.922 fio-3.35 00:31:03.922 Starting 4 threads 00:31:03.922 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.212 00:31:09.212 filename0: (groupid=0, jobs=1): err= 0: pid=1356878: Tue Jun 11 09:45:40 2024 00:31:09.212 read: IOPS=2071, BW=16.2MiB/s (17.0MB/s)(81.0MiB/5004msec) 00:31:09.212 slat (nsec): min=8179, max=67094, avg=9188.42, stdev=2955.11 00:31:09.212 clat (usec): min=1190, max=6416, avg=3838.43, stdev=615.06 00:31:09.212 lat (usec): min=1207, max=6424, avg=3847.62, stdev=615.22 00:31:09.212 clat percentiles (usec): 00:31:09.213 | 1.00th=[ 2769], 5.00th=[ 3294], 10.00th=[ 3425], 20.00th=[ 3523], 00:31:09.213 | 30.00th=[ 3589], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3752], 00:31:09.213 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 5145], 95.00th=[ 5473], 00:31:09.213 | 99.00th=[ 5669], 99.50th=[ 5735], 99.90th=[ 5866], 99.95th=[ 6063], 00:31:09.213 | 99.99th=[ 6390] 00:31:09.213 bw ( KiB/s): min=15840, max=17600, per=25.02%, avg=16659.56, stdev=765.40, samples=9 00:31:09.213 iops : min= 1980, max= 2200, avg=2082.44, stdev=95.68, samples=9 00:31:09.213 lat (msec) : 2=0.17%, 4=86.44%, 10=13.39% 00:31:09.213 cpu : usr=96.56%, sys=3.20%, ctx=5, majf=0, minf=118 00:31:09.213 IO depths : 1=0.1%, 2=0.4%, 4=68.9%, 8=30.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:09.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.213 complete : 0=0.0%, 4=95.3%, 8=4.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.213 issued rwts: total=10366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.213 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:09.213 filename0: (groupid=0, jobs=1): err= 0: pid=1356879: Tue Jun 11 09:45:40 2024 00:31:09.213 read: IOPS=2061, BW=16.1MiB/s (16.9MB/s)(80.6MiB/5006msec) 00:31:09.213 slat (nsec): min=8174, max=78157, avg=9291.18, stdev=3242.96 00:31:09.213 clat (usec): min=1529, max=6333, avg=3853.76, stdev=601.86 00:31:09.213 lat (usec): min=1537, max=6341, avg=3863.05, stdev=601.56 00:31:09.213 clat percentiles (usec): 00:31:09.213 | 1.00th=[ 2802], 5.00th=[ 3359], 10.00th=[ 3458], 20.00th=[ 3523], 00:31:09.213 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3720], 60.00th=[ 3785], 00:31:09.213 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 5145], 95.00th=[ 5473], 00:31:09.213 | 99.00th=[ 5669], 99.50th=[ 5735], 99.90th=[ 6063], 99.95th=[ 6194], 00:31:09.213 | 99.99th=[ 6325] 00:31:09.213 bw ( KiB/s): min=16128, max=17504, per=24.78%, avg=16502.40, stdev=502.48, samples=10 00:31:09.213 iops : min= 2016, max= 2188, avg=2062.80, stdev=62.81, samples=10 00:31:09.213 lat (msec) : 2=0.13%, 4=86.01%, 10=13.86% 00:31:09.213 cpu : usr=97.02%, sys=2.70%, ctx=7, majf=0, minf=69 00:31:09.213 IO depths : 1=0.1%, 2=0.2%, 4=72.9%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:09.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.213 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.213 issued rwts: total=10322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.213 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:09.213 filename1: (groupid=0, jobs=1): err= 0: pid=1356880: Tue Jun 11 09:45:40 2024 00:31:09.213 read: IOPS=2094, BW=16.4MiB/s (17.2MB/s)(81.9MiB/5002msec) 00:31:09.213 slat (nsec): min=8179, max=43832, avg=9832.04, stdev=3484.90 00:31:09.213 clat (usec): min=1934, max=46501, avg=3796.19, stdev=1262.99 00:31:09.213 lat (usec): min=1943, max=46533, avg=3806.02, stdev=1263.03 00:31:09.213 clat percentiles (usec): 00:31:09.213 | 1.00th=[ 3064], 5.00th=[ 
3359], 10.00th=[ 3425], 20.00th=[ 3556], 00:31:09.213 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3720], 60.00th=[ 3785], 00:31:09.213 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 3916], 95.00th=[ 4883], 00:31:09.213 | 99.00th=[ 5800], 99.50th=[ 5866], 99.90th=[ 6325], 99.95th=[46400], 00:31:09.213 | 99.99th=[46400] 00:31:09.213 bw ( KiB/s): min=14733, max=17360, per=25.08%, avg=16698.33, stdev=864.41, samples=9 00:31:09.213 iops : min= 1841, max= 2170, avg=2087.22, stdev=108.23, samples=9 00:31:09.213 lat (msec) : 2=0.02%, 4=91.36%, 10=8.54%, 50=0.08% 00:31:09.213 cpu : usr=96.34%, sys=3.38%, ctx=8, majf=0, minf=79 00:31:09.213 IO depths : 1=0.1%, 2=0.3%, 4=65.8%, 8=33.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:09.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.213 complete : 0=0.0%, 4=97.5%, 8=2.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.213 issued rwts: total=10479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.213 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:09.213 filename1: (groupid=0, jobs=1): err= 0: pid=1356881: Tue Jun 11 09:45:40 2024 00:31:09.213 read: IOPS=2097, BW=16.4MiB/s (17.2MB/s)(82.0MiB/5005msec) 00:31:09.213 slat (nsec): min=8176, max=53124, avg=9544.02, stdev=3487.91 00:31:09.213 clat (usec): min=1390, max=6760, avg=3791.01, stdev=508.10 00:31:09.213 lat (usec): min=1399, max=6788, avg=3800.55, stdev=507.95 00:31:09.213 clat percentiles (usec): 00:31:09.213 | 1.00th=[ 2999], 5.00th=[ 3359], 10.00th=[ 3458], 20.00th=[ 3556], 00:31:09.213 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3720], 60.00th=[ 3785], 00:31:09.213 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 3949], 95.00th=[ 5342], 00:31:09.213 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 6128], 99.95th=[ 6259], 00:31:09.213 | 99.99th=[ 6783] 00:31:09.213 bw ( KiB/s): min=15952, max=17408, per=25.22%, avg=16790.40, stdev=615.59, samples=10 00:31:09.213 iops : min= 1994, max= 2176, avg=2098.80, stdev=76.95, samples=10 00:31:09.213 lat (msec) : 2=0.10%, 4=90.17%, 10=9.72% 00:31:09.213 cpu : usr=96.66%, sys=3.08%, ctx=6, majf=0, minf=64 00:31:09.213 IO depths : 1=0.1%, 2=0.2%, 4=67.0%, 8=32.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:09.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.213 complete : 0=0.0%, 4=96.7%, 8=3.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.213 issued rwts: total=10500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.213 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:09.213 00:31:09.213 Run status group 0 (all jobs): 00:31:09.213 READ: bw=65.0MiB/s (68.2MB/s), 16.1MiB/s-16.4MiB/s (16.9MB/s-17.2MB/s), io=326MiB (341MB), run=5002-5006msec 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:09.213 00:31:09.213 real 0m24.089s 00:31:09.213 user 5m20.006s 00:31:09.213 sys 0m4.014s 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:09.213 09:45:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:09.213 ************************************ 00:31:09.213 END TEST fio_dif_rand_params 00:31:09.213 ************************************ 00:31:09.213 09:45:40 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:09.213 09:45:40 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:09.213 09:45:40 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:09.213 09:45:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:09.213 ************************************ 00:31:09.213 START TEST fio_dif_digest 00:31:09.213 ************************************ 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # fio_dif_digest 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:09.213 bdev_null0 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:09.213 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:09.214 [2024-06-11 09:45:40.844136] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:09.214 { 00:31:09.214 "params": { 00:31:09.214 "name": "Nvme$subsystem", 00:31:09.214 "trtype": "$TEST_TRANSPORT", 00:31:09.214 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:31:09.214 "adrfam": "ipv4", 00:31:09.214 "trsvcid": "$NVMF_PORT", 00:31:09.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:09.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:09.214 "hdgst": ${hdgst:-false}, 00:31:09.214 "ddgst": ${ddgst:-false} 00:31:09.214 }, 00:31:09.214 "method": "bdev_nvme_attach_controller" 00:31:09.214 } 00:31:09.214 EOF 00:31:09.214 )") 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local sanitizers 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # shift 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local asan_lib= 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libasan 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:09.214 "params": { 00:31:09.214 "name": "Nvme0", 00:31:09.214 "trtype": "tcp", 00:31:09.214 "traddr": "10.0.0.2", 00:31:09.214 "adrfam": "ipv4", 00:31:09.214 "trsvcid": "4420", 00:31:09.214 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:09.214 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:09.214 "hdgst": true, 00:31:09.214 "ddgst": true 00:31:09.214 }, 00:31:09.214 "method": "bdev_nvme_attach_controller" 00:31:09.214 }' 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:09.214 09:45:40 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:09.473 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:09.473 ... 
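Two settings distinguish this fio_dif_digest run from the rand_params jobs above: the null bdev behind cnode0 was created with --dif-type 3 (512-byte blocks carrying 16 bytes of protection metadata), and the attach parameters printed just above set "hdgst": true and "ddgst": true, enabling NVMe/TCP header and data digests (CRC32C) on the initiator connection. The target-side stack is the same four RPCs traced earlier; in plain scripts/rpc.py form it is a sketch of what the harness's rpc_cmd wrapper executes, assuming nvmf_tgt is already running and (as the harness does at startup) a TCP transport has been created:

# Prerequisite performed once by the harness when the target starts:
./scripts/rpc.py nvmf_create_transport -t tcp
# Per-subsystem setup for the digest test, mirroring the trace above:
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420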
00:31:09.473 fio-3.35 00:31:09.473 Starting 3 threads 00:31:09.473 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.728 00:31:21.728 filename0: (groupid=0, jobs=1): err= 0: pid=1358204: Tue Jun 11 09:45:51 2024 00:31:21.728 read: IOPS=197, BW=24.6MiB/s (25.8MB/s)(247MiB/10043msec) 00:31:21.728 slat (nsec): min=8528, max=32351, avg=10418.39, stdev=1444.85 00:31:21.728 clat (usec): min=9600, max=57561, avg=15188.94, stdev=5587.43 00:31:21.728 lat (usec): min=9611, max=57571, avg=15199.36, stdev=5587.44 00:31:21.728 clat percentiles (usec): 00:31:21.728 | 1.00th=[10552], 5.00th=[11863], 10.00th=[12649], 20.00th=[13435], 00:31:21.728 | 30.00th=[13829], 40.00th=[14222], 50.00th=[14484], 60.00th=[14877], 00:31:21.728 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16319], 95.00th=[16909], 00:31:21.728 | 99.00th=[55313], 99.50th=[56361], 99.90th=[57410], 99.95th=[57410], 00:31:21.728 | 99.99th=[57410] 00:31:21.728 bw ( KiB/s): min=21760, max=27136, per=30.78%, avg=25303.05, stdev=1470.05, samples=20 00:31:21.728 iops : min= 170, max= 212, avg=197.65, stdev=11.49, samples=20 00:31:21.728 lat (msec) : 10=0.15%, 20=98.08%, 100=1.77% 00:31:21.728 cpu : usr=95.59%, sys=4.11%, ctx=21, majf=0, minf=160 00:31:21.728 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.728 issued rwts: total=1979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.728 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:21.728 filename0: (groupid=0, jobs=1): err= 0: pid=1358205: Tue Jun 11 09:45:51 2024 00:31:21.728 read: IOPS=186, BW=23.3MiB/s (24.4MB/s)(234MiB/10047msec) 00:31:21.728 slat (nsec): min=8475, max=33654, avg=9315.77, stdev=941.14 00:31:21.728 clat (usec): min=9739, max=58837, avg=16075.86, stdev=6581.69 00:31:21.728 lat (usec): min=9748, max=58845, avg=16085.18, stdev=6581.68 00:31:21.728 clat percentiles (usec): 00:31:21.728 | 1.00th=[10814], 5.00th=[12518], 10.00th=[13304], 20.00th=[13960], 00:31:21.728 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15139], 60.00th=[15533], 00:31:21.728 | 70.00th=[15795], 80.00th=[16319], 90.00th=[17171], 95.00th=[17695], 00:31:21.728 | 99.00th=[56361], 99.50th=[57410], 99.90th=[58459], 99.95th=[58983], 00:31:21.728 | 99.99th=[58983] 00:31:21.728 bw ( KiB/s): min=20736, max=26368, per=29.10%, avg=23923.20, stdev=1841.32, samples=20 00:31:21.728 iops : min= 162, max= 206, avg=186.90, stdev=14.39, samples=20 00:31:21.728 lat (msec) : 10=0.21%, 20=97.06%, 50=0.27%, 100=2.46% 00:31:21.728 cpu : usr=95.77%, sys=3.98%, ctx=20, majf=0, minf=121 00:31:21.728 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.728 issued rwts: total=1871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.728 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:21.728 filename0: (groupid=0, jobs=1): err= 0: pid=1358206: Tue Jun 11 09:45:51 2024 00:31:21.728 read: IOPS=259, BW=32.4MiB/s (34.0MB/s)(325MiB/10043msec) 00:31:21.728 slat (nsec): min=8520, max=31925, avg=9363.68, stdev=805.58 00:31:21.728 clat (usec): min=6788, max=48607, avg=11547.44, stdev=1751.37 00:31:21.728 lat (usec): min=6797, max=48616, avg=11556.81, stdev=1751.42 00:31:21.728 clat percentiles (usec): 00:31:21.728 | 
1.00th=[ 7701], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[10683], 00:31:21.728 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:31:21.728 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13042], 95.00th=[13435], 00:31:21.728 | 99.00th=[14091], 99.50th=[14615], 99.90th=[17957], 99.95th=[47449], 00:31:21.728 | 99.99th=[48497] 00:31:21.728 bw ( KiB/s): min=31488, max=35584, per=40.50%, avg=33292.80, stdev=1042.28, samples=20 00:31:21.728 iops : min= 246, max= 278, avg=260.10, stdev= 8.14, samples=20 00:31:21.728 lat (msec) : 10=14.25%, 20=85.67%, 50=0.08% 00:31:21.728 cpu : usr=96.26%, sys=3.46%, ctx=62, majf=0, minf=99 00:31:21.729 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:21.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:21.729 issued rwts: total=2603,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:21.729 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:21.729 00:31:21.729 Run status group 0 (all jobs): 00:31:21.729 READ: bw=80.3MiB/s (84.2MB/s), 23.3MiB/s-32.4MiB/s (24.4MB/s-34.0MB/s), io=807MiB (846MB), run=10043-10047msec 00:31:21.729 09:45:51 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:21.729 09:45:51 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:21.729 09:45:51 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:21.729 09:45:51 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:21.729 09:45:51 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:21.729 09:45:51 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:21.729 09:45:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:21.729 09:45:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:21.729 09:45:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:21.729 09:45:51 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:21.729 09:45:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:21.729 09:45:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:21.729 09:45:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:21.729 00:31:21.729 real 0m11.108s 00:31:21.729 user 0m43.424s 00:31:21.729 sys 0m1.445s 00:31:21.729 09:45:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:21.729 09:45:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:21.729 ************************************ 00:31:21.729 END TEST fio_dif_digest 00:31:21.729 ************************************ 00:31:21.729 09:45:51 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:21.729 09:45:51 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:21.729 09:45:51 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:21.729 09:45:51 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:21.729 09:45:51 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:21.729 09:45:51 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:21.729 09:45:51 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:21.729 09:45:51 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:21.729 rmmod nvme_tcp 00:31:21.729 rmmod nvme_fabrics 00:31:21.729 
rmmod nvme_keyring 00:31:21.729 09:45:52 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:21.729 09:45:52 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:21.729 09:45:52 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:21.729 09:45:52 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1347363 ']' 00:31:21.729 09:45:52 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1347363 00:31:21.729 09:45:52 nvmf_dif -- common/autotest_common.sh@949 -- # '[' -z 1347363 ']' 00:31:21.729 09:45:52 nvmf_dif -- common/autotest_common.sh@953 -- # kill -0 1347363 00:31:21.729 09:45:52 nvmf_dif -- common/autotest_common.sh@954 -- # uname 00:31:21.729 09:45:52 nvmf_dif -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:21.729 09:45:52 nvmf_dif -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1347363 00:31:21.729 09:45:52 nvmf_dif -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:21.729 09:45:52 nvmf_dif -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:21.729 09:45:52 nvmf_dif -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1347363' 00:31:21.729 killing process with pid 1347363 00:31:21.729 09:45:52 nvmf_dif -- common/autotest_common.sh@968 -- # kill 1347363 00:31:21.729 09:45:52 nvmf_dif -- common/autotest_common.sh@973 -- # wait 1347363 00:31:21.729 09:45:52 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:21.729 09:45:52 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:23.641 Waiting for block devices as requested 00:31:23.641 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:23.900 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:23.900 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:23.900 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:23.900 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:24.163 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:24.163 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:24.163 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:24.423 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:24.423 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:24.423 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:24.683 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:24.683 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:24.683 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:24.945 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:24.945 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:24.945 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:24.945 09:45:56 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:24.945 09:45:56 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:24.945 09:45:56 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:24.945 09:45:56 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:24.945 09:45:56 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.945 09:45:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:24.945 09:45:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.487 09:45:58 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:27.487 00:31:27.487 real 1m16.618s 00:31:27.487 user 8m1.141s 00:31:27.487 sys 0m19.434s 00:31:27.487 09:45:58 nvmf_dif -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:27.487 09:45:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 
00:31:27.487 ************************************ 00:31:27.487 END TEST nvmf_dif 00:31:27.487 ************************************ 00:31:27.487 09:45:58 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:27.487 09:45:58 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:27.487 09:45:58 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:27.487 09:45:58 -- common/autotest_common.sh@10 -- # set +x 00:31:27.487 ************************************ 00:31:27.487 START TEST nvmf_abort_qd_sizes 00:31:27.487 ************************************ 00:31:27.487 09:45:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:27.487 * Looking for test storage... 00:31:27.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:27.487 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:27.488 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:27.488 09:45:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:27.488 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:27.488 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:27.488 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:27.488 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:27.488 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:27.488 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.488 09:45:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:27.488 09:45:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.488 09:45:59 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:27.488 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:27.488 09:45:59 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:27.488 09:45:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:34.066 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:34.066 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:34.066 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:34.066 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:34.066 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:34.066 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:34.066 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:34.066 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:34.066 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:34.066 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:34.066 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:34.066 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:34.066 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:34.066 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:34.066 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:34.067 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:34.067 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:34.067 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:34.067 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:34.327 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
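With both E810 ports identified (cvl_0_0 and cvl_0_1), the nvmf_tcp_init trace that follows builds the single-host test topology: the target port is moved into its own network namespace with address 10.0.0.2, the initiator port keeps 10.0.0.1 in the root namespace, an iptables rule admits the NVMe/TCP listener port, and one ping in each direction verifies the path. Condensed into plain commands (run as root; interface names come from the "Found net devices" lines above):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns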
00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:34.327 09:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:34.327 09:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:34.327 09:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:34.327 09:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:34.327 09:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:34.587 09:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:34.587 09:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:34.587 09:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:34.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:34.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:31:34.587 00:31:34.587 --- 10.0.0.2 ping statistics --- 00:31:34.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.587 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:31:34.587 09:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:34.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:34.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:31:34.587 00:31:34.587 --- 10.0.0.1 ping statistics --- 00:31:34.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.587 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:31:34.587 09:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:34.587 09:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:34.587 09:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:34.587 09:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:37.884 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:37.884 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:37.884 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:37.884 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:37.884 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:37.884 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:37.884 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:37.884 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:37.884 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:37.884 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:37.884 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:37.884 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:37.884 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:37.884 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:37.884 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:37.884 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:37.884 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:38.144 09:46:09 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:38.144 09:46:09 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:38.144 09:46:09 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:38.144 09:46:09 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:38.144 09:46:09 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:38.144 09:46:09 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:38.144 09:46:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:38.144 09:46:09 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:38.144 09:46:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:38.144 09:46:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:38.144 09:46:09 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1367493 00:31:38.144 09:46:09 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1367493 00:31:38.144 09:46:09 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:38.144 09:46:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # '[' -z 1367493 ']' 00:31:38.144 09:46:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.144 09:46:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:38.144 09:46:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:38.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:38.144 09:46:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:38.144 09:46:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:38.144 [2024-06-11 09:46:09.884560] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:31:38.144 [2024-06-11 09:46:09.884620] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:38.144 EAL: No free 2048 kB hugepages reported on node 1 00:31:38.404 [2024-06-11 09:46:09.970400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:38.404 [2024-06-11 09:46:10.073548] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:38.404 [2024-06-11 09:46:10.073609] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:38.404 [2024-06-11 09:46:10.073617] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:38.404 [2024-06-11 09:46:10.073624] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:38.404 [2024-06-11 09:46:10.073631] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:38.404 [2024-06-11 09:46:10.073759] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:38.404 [2024-06-11 09:46:10.073891] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:38.404 [2024-06-11 09:46:10.074060] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.404 [2024-06-11 09:46:10.074061] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:31:38.973 09:46:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:38.973 09:46:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@863 -- # return 0 00:31:38.973 09:46:10 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:38.973 09:46:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:38.973 09:46:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:39.233 09:46:10 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:39.233 09:46:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:39.233 09:46:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:39.233 09:46:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:39.233 09:46:10 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:39.233 09:46:10 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:39.233 09:46:10 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:31:39.233 09:46:10 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:39.233 09:46:10 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:39.234 09:46:10 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:31:39.234 09:46:10 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:31:39.234 09:46:10 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:39.234 09:46:10 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:39.234 09:46:10 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:39.234 09:46:10 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:31:39.234 09:46:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:39.234 09:46:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:31:39.234 09:46:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:39.234 09:46:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:39.234 09:46:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:39.234 09:46:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:39.234 ************************************ 00:31:39.234 START TEST spdk_target_abort 00:31:39.234 ************************************ 00:31:39.234 09:46:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # spdk_target 00:31:39.234 09:46:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:39.234 09:46:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:31:39.234 09:46:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:39.234 09:46:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:39.494 spdk_targetn1 00:31:39.494 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:39.494 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:39.494 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:39.494 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:39.494 [2024-06-11 09:46:11.164150] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:39.494 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:39.494 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:39.495 [2024-06-11 09:46:11.204425] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:39.495 09:46:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:39.495 EAL: No free 2048 kB hugepages reported on node 1 
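
The target string handed to the abort example is assembled by the loop traced at abort_qd_sizes.sh@28-29 above. A sketch of the same pattern, with variable values copied from the log and the binary path assuming an SPDK checkout:

    trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:testnqn
    target=
    for r in trtype adrfam traddr trsvcid subnqn; do
        target="${target:+$target }$r:${!r}"    # indirect expansion appends "name:value"
    done
    # ./build/examples/abort -q 4 -w rw -M 50 -o 4096 -r "$target"
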
00:31:39.756 [2024-06-11 09:46:11.338756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:472 len:8 PRP1 0x2000078be000 PRP2 0x0 00:31:39.756 [2024-06-11 09:46:11.338783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:003d p:1 m:0 dnr:0 00:31:39.756 [2024-06-11 09:46:11.348336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:840 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:31:39.756 [2024-06-11 09:46:11.348353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:006a p:1 m:0 dnr:0 00:31:39.756 [2024-06-11 09:46:11.385784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2072 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:31:39.756 [2024-06-11 09:46:11.385802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:39.756 [2024-06-11 09:46:11.392292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2240 len:8 PRP1 0x2000078be000 PRP2 0x0 00:31:39.756 [2024-06-11 09:46:11.392306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:39.756 [2024-06-11 09:46:11.403777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2720 len:8 PRP1 0x2000078be000 PRP2 0x0 00:31:39.756 [2024-06-11 09:46:11.403793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:39.757 [2024-06-11 09:46:11.407699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2784 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:31:39.757 [2024-06-11 09:46:11.407711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:39.757 [2024-06-11 09:46:11.414811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3000 len:8 PRP1 0x2000078be000 PRP2 0x0 00:31:39.757 [2024-06-11 09:46:11.414826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:43.055 Initializing NVMe Controllers 00:31:43.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:43.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:43.055 Initialization complete. Launching workers. 
00:31:43.055 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11928, failed: 7 00:31:43.055 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3238, failed to submit 8697 00:31:43.055 success 739, unsuccess 2499, failed 0 00:31:43.055 09:46:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:43.055 09:46:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:43.055 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.055 [2024-06-11 09:46:14.686406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:2128 len:8 PRP1 0x200007c48000 PRP2 0x0 00:31:43.055 [2024-06-11 09:46:14.686446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:46.353 Initializing NVMe Controllers 00:31:46.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:46.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:46.354 Initialization complete. Launching workers. 00:31:46.354 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8613, failed: 1 00:31:46.354 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1218, failed to submit 7396 00:31:46.354 success 325, unsuccess 893, failed 0 00:31:46.354 09:46:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:46.354 09:46:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:46.354 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.952 [2024-06-11 09:46:18.455068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:146 nsid:1 lba:67912 len:8 PRP1 0x20000790a000 PRP2 0x0 00:31:46.952 [2024-06-11 09:46:18.455098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:146 cdw0:0 sqhd:00cb p:0 m:0 dnr:0 00:31:48.865 [2024-06-11 09:46:20.675154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:169 nsid:1 lba:318488 len:8 PRP1 0x2000078d6000 PRP2 0x0 00:31:48.865 [2024-06-11 09:46:20.675190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:169 cdw0:0 sqhd:0028 p:1 m:0 dnr:0 00:31:49.126 Initializing NVMe Controllers 00:31:49.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:49.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:49.126 Initialization complete. Launching workers. 
00:31:49.126 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42320, failed: 2 00:31:49.126 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2602, failed to submit 39720 00:31:49.126 success 622, unsuccess 1980, failed 0 00:31:49.126 09:46:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:49.126 09:46:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:49.126 09:46:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:49.126 09:46:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:49.126 09:46:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:49.126 09:46:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:49.126 09:46:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:51.035 09:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:51.035 09:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1367493 00:31:51.035 09:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@949 -- # '[' -z 1367493 ']' 00:31:51.035 09:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # kill -0 1367493 00:31:51.035 09:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # uname 00:31:51.035 09:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:51.035 09:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1367493 00:31:51.035 09:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:51.035 09:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:51.035 09:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1367493' 00:31:51.035 killing process with pid 1367493 00:31:51.035 09:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # kill 1367493 00:31:51.035 09:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # wait 1367493 00:31:51.295 00:31:51.295 real 0m12.071s 00:31:51.295 user 0m49.295s 00:31:51.295 sys 0m1.900s 00:31:51.295 09:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:51.295 09:46:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:51.295 ************************************ 00:31:51.295 END TEST spdk_target_abort 00:31:51.295 ************************************ 00:31:51.295 09:46:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:51.295 09:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:51.295 09:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:51.295 09:46:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:51.295 ************************************ 00:31:51.295 START TEST kernel_target_abort 00:31:51.295 
************************************ 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # kernel_target 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:51.295 09:46:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:54.593 Waiting for block devices as requested 00:31:54.593 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:54.593 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:54.593 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:54.593 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:54.593 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:54.593 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:54.593 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:54.854 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:54.854 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:54.855 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:55.116 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:55.116 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:55.116 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:55.378 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:55.378 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:55.378 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:55.639 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:55.639 No valid GPT data, bailing 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:55.639 09:46:27 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:31:55.639 00:31:55.639 Discovery Log Number of Records 2, Generation counter 2 00:31:55.639 =====Discovery Log Entry 0====== 00:31:55.639 trtype: tcp 00:31:55.639 adrfam: ipv4 00:31:55.639 subtype: current discovery subsystem 00:31:55.639 treq: not specified, sq flow control disable supported 00:31:55.639 portid: 1 00:31:55.639 trsvcid: 4420 00:31:55.639 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:55.639 traddr: 10.0.0.1 00:31:55.639 eflags: none 00:31:55.639 sectype: none 00:31:55.639 =====Discovery Log Entry 1====== 00:31:55.639 trtype: tcp 00:31:55.639 adrfam: ipv4 00:31:55.639 subtype: nvme subsystem 00:31:55.639 treq: not specified, sq flow control disable supported 00:31:55.639 portid: 1 00:31:55.639 trsvcid: 4420 00:31:55.639 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:55.639 traddr: 10.0.0.1 00:31:55.639 eflags: none 00:31:55.639 sectype: none 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.639 09:46:27 
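
xtrace does not show redirection targets, so the configfs attribute names below are assumptions based on the standard Linux nvmet layout; the mkdir paths, NQN, block device, and address values are exactly those traced above. A condensed sketch of what configure_kernel_target is doing:

    modprobe nvmet                                  # nvmet-tcp is also needed for the tcp port
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"             # expose the subsystem on the port

Once the symlink lands, the kernel target answers nvme discover on 10.0.0.1:4420, which produces the two-entry discovery log printed above.
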
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:55.639 09:46:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:55.639 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.945 Initializing NVMe Controllers 00:31:58.945 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:58.945 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:58.945 Initialization complete. Launching workers. 00:31:58.945 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 54620, failed: 0 00:31:58.945 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 54620, failed to submit 0 00:31:58.945 success 0, unsuccess 54620, failed 0 00:31:58.945 09:46:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:58.945 09:46:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:58.945 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.245 Initializing NVMe Controllers 00:32:02.245 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:02.245 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:02.245 Initialization complete. Launching workers. 
00:32:02.245 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94334, failed: 0 00:32:02.245 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23774, failed to submit 70560 00:32:02.245 success 0, unsuccess 23774, failed 0 00:32:02.245 09:46:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:02.245 09:46:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:02.245 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.540 Initializing NVMe Controllers 00:32:05.540 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:05.540 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:05.540 Initialization complete. Launching workers. 00:32:05.540 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92917, failed: 0 00:32:05.540 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23242, failed to submit 69675 00:32:05.540 success 0, unsuccess 23242, failed 0 00:32:05.540 09:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:05.540 09:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:05.540 09:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:05.540 09:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:05.540 09:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:05.540 09:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:05.540 09:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:05.540 09:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:05.541 09:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:05.541 09:46:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:08.841 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:08.841 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:08.841 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:08.841 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:08.841 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:08.841 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:08.841 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:08.841 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:08.841 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:08.841 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:08.841 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:08.841 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:08.841 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:08.841 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:32:08.841 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:08.841 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:10.224 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:10.224 00:32:10.224 real 0m18.931s 00:32:10.224 user 0m8.238s 00:32:10.224 sys 0m5.562s 00:32:10.224 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:10.224 09:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:10.224 ************************************ 00:32:10.224 END TEST kernel_target_abort 00:32:10.224 ************************************ 00:32:10.224 09:46:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:10.224 09:46:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:10.224 09:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:10.224 09:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:10.224 09:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:10.224 09:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:10.224 09:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:10.224 09:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:10.224 rmmod nvme_tcp 00:32:10.224 rmmod nvme_fabrics 00:32:10.224 rmmod nvme_keyring 00:32:10.487 09:46:42 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:10.487 09:46:42 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:10.487 09:46:42 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:10.487 09:46:42 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1367493 ']' 00:32:10.487 09:46:42 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1367493 00:32:10.487 09:46:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@949 -- # '[' -z 1367493 ']' 00:32:10.487 09:46:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@953 -- # kill -0 1367493 00:32:10.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1367493) - No such process 00:32:10.487 09:46:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@976 -- # echo 'Process with pid 1367493 is not found' 00:32:10.487 Process with pid 1367493 is not found 00:32:10.487 09:46:42 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:10.487 09:46:42 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:13.963 Waiting for block devices as requested 00:32:13.963 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:13.963 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:13.963 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:13.963 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:13.963 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:14.223 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:14.223 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:14.223 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:14.484 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:14.484 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:14.484 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:14.744 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:14.744 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:14.744 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:15.006 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:15.006 0000:00:01.0 
(8086 0b00): vfio-pci -> ioatdma 00:32:15.006 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:15.267 09:46:46 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:15.267 09:46:46 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:15.267 09:46:46 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:15.267 09:46:46 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:15.267 09:46:46 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.267 09:46:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:15.267 09:46:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.180 09:46:48 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:17.180 00:32:17.180 real 0m49.986s 00:32:17.180 user 1m2.617s 00:32:17.180 sys 0m17.910s 00:32:17.180 09:46:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:17.180 09:46:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:17.180 ************************************ 00:32:17.180 END TEST nvmf_abort_qd_sizes 00:32:17.180 ************************************ 00:32:17.180 09:46:48 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:17.180 09:46:48 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:17.180 09:46:48 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:17.180 09:46:48 -- common/autotest_common.sh@10 -- # set +x 00:32:17.180 ************************************ 00:32:17.180 START TEST keyring_file 00:32:17.180 ************************************ 00:32:17.180 09:46:48 keyring_file -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:17.441 * Looking for test storage... 
00:32:17.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:17.441 09:46:49 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:17.441 09:46:49 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:17.441 09:46:49 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:17.441 09:46:49 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:17.441 09:46:49 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:17.441 09:46:49 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:17.441 09:46:49 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:17.441 09:46:49 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:17.441 09:46:49 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:17.441 09:46:49 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:17.441 09:46:49 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:17.441 09:46:49 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:17.441 09:46:49 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:17.442 09:46:49 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:17.442 09:46:49 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:17.442 09:46:49 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:17.442 09:46:49 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.442 09:46:49 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.442 09:46:49 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.442 09:46:49 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:17.442 09:46:49 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:17.442 09:46:49 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:17.442 09:46:49 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:17.442 09:46:49 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:17.442 09:46:49 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:17.442 09:46:49 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:17.442 09:46:49 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:17.442 09:46:49 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:17.442 09:46:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:17.442 09:46:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:17.442 09:46:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:17.442 09:46:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:17.442 09:46:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:17.442 09:46:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.bLdy53elHQ 00:32:17.442 09:46:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:17.442 09:46:49 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.bLdy53elHQ 00:32:17.442 09:46:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.bLdy53elHQ 00:32:17.442 09:46:49 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.bLdy53elHQ 00:32:17.442 09:46:49 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:17.442 09:46:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:17.442 09:46:49 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:17.442 09:46:49 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:17.442 09:46:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:17.442 09:46:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:17.442 09:46:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.awQCaji1ok 00:32:17.442 09:46:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:17.442 09:46:49 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:17.442 09:46:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.awQCaji1ok 00:32:17.442 09:46:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.awQCaji1ok 00:32:17.442 09:46:49 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.awQCaji1ok 00:32:17.442 09:46:49 keyring_file -- keyring/file.sh@30 -- # tgtpid=1377386 00:32:17.442 09:46:49 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1377386 00:32:17.442 09:46:49 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:17.442 09:46:49 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1377386 ']' 00:32:17.442 09:46:49 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.442 09:46:49 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:17.442 09:46:49 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.442 09:46:49 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:17.442 09:46:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:17.703 [2024-06-11 09:46:49.258169] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
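
prep_key, traced above, materializes each TLS PSK as a mode-0600 temp file so the keyring can load it by path. A sketch using the helper named in the trace (to my understanding the python snippet emits the interchange form NVMeTLSkey-1:0<digest>:<base64 of key bytes plus CRC32>:, but the exact encoding is an assumption, not shown by xtrace):

    key=00112233445566778899aabbccddeeff    # key0, from the log
    digest=0
    path=$(mktemp)                          # e.g. /tmp/tmp.bLdy53elHQ above
    format_interchange_psk "$key" "$digest" > "$path"   # helper in nvmf/common.sh
    chmod 0600 "$path"                      # restrict permissions before registering the key
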
00:32:17.703 [2024-06-11 09:46:49.258236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377386 ] 00:32:17.703 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.703 [2024-06-11 09:46:49.338305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.703 [2024-06-11 09:46:49.434776] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.274 09:46:50 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:18.274 09:46:50 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:32:18.274 09:46:50 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:18.274 09:46:50 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:18.274 09:46:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:18.274 [2024-06-11 09:46:50.085826] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:18.535 null0 00:32:18.535 [2024-06-11 09:46:50.117872] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:18.535 [2024-06-11 09:46:50.118149] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:18.535 [2024-06-11 09:46:50.125889] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:18.535 09:46:50 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:18.535 [2024-06-11 09:46:50.137920] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:18.535 request: 00:32:18.535 { 00:32:18.535 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:18.535 "secure_channel": false, 00:32:18.535 "listen_address": { 00:32:18.535 "trtype": "tcp", 00:32:18.535 "traddr": "127.0.0.1", 00:32:18.535 "trsvcid": "4420" 00:32:18.535 }, 00:32:18.535 "method": "nvmf_subsystem_add_listener", 00:32:18.535 "req_id": 1 00:32:18.535 } 00:32:18.535 Got JSON-RPC error response 00:32:18.535 response: 00:32:18.535 { 00:32:18.535 "code": -32602, 00:32:18.535 "message": "Invalid parameters" 00:32:18.535 } 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:18.535 09:46:50 
keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:18.535 09:46:50 keyring_file -- keyring/file.sh@46 -- # bperfpid=1377682 00:32:18.535 09:46:50 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1377682 /var/tmp/bperf.sock 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1377682 ']' 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:18.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:18.535 09:46:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:18.535 09:46:50 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:18.535 [2024-06-11 09:46:50.194012] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 00:32:18.536 [2024-06-11 09:46:50.194061] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377682 ] 00:32:18.536 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.536 [2024-06-11 09:46:50.252078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.536 [2024-06-11 09:46:50.318223] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:18.796 09:46:50 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:18.796 09:46:50 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:32:18.796 09:46:50 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bLdy53elHQ 00:32:18.796 09:46:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bLdy53elHQ 00:32:18.796 09:46:50 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.awQCaji1ok 00:32:18.796 09:46:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.awQCaji1ok 00:32:19.057 09:46:50 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:19.057 09:46:50 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:19.057 09:46:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.057 09:46:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:19.057 09:46:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.318 09:46:51 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.bLdy53elHQ == \/\t\m\p\/\t\m\p\.\b\L\d\y\5\3\e\l\H\Q ]] 00:32:19.318 09:46:51 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:32:19.318 09:46:51 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:19.318 09:46:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.318 09:46:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.318 09:46:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:19.578 09:46:51 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.awQCaji1ok == \/\t\m\p\/\t\m\p\.\a\w\Q\C\a\j\i\1\o\k ]] 00:32:19.578 09:46:51 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:19.578 09:46:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:19.578 09:46:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.578 09:46:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.578 09:46:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.578 09:46:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:19.578 09:46:51 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:19.838 09:46:51 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:19.838 09:46:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:19.838 09:46:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.838 09:46:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.838 09:46:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.838 09:46:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:19.838 09:46:51 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:19.838 09:46:51 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:19.838 09:46:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:20.098 [2024-06-11 09:46:51.790332] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:20.098 nvme0n1 00:32:20.098 09:46:51 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:20.098 09:46:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:20.098 09:46:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:20.098 09:46:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:20.098 09:46:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:20.098 09:46:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:20.358 09:46:52 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:20.359 09:46:52 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:20.359 09:46:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:20.359 09:46:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:20.359 09:46:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:20.359 
09:46:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:20.359 09:46:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:20.618 09:46:52 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:20.618 09:46:52 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:20.618 Running I/O for 1 seconds... 00:32:22.002 00:32:22.002 Latency(us) 00:32:22.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.002 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:22.002 nvme0n1 : 1.01 9707.45 37.92 0.00 0.00 13100.25 6580.91 20316.16 00:32:22.002 =================================================================================================================== 00:32:22.002 Total : 9707.45 37.92 0.00 0.00 13100.25 6580.91 20316.16 00:32:22.002 0 00:32:22.002 09:46:53 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:22.002 09:46:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:22.002 09:46:53 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:22.002 09:46:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:22.002 09:46:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:22.002 09:46:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.002 09:46:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:22.002 09:46:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.262 09:46:53 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:22.262 09:46:53 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:22.262 09:46:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:22.262 09:46:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:22.262 09:46:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.262 09:46:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:22.262 09:46:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.523 09:46:54 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:22.523 09:46:54 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:22.523 09:46:54 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:22.523 09:46:54 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:22.523 09:46:54 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:22.523 09:46:54 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:22.523 09:46:54 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:22.523 09:46:54 
keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:22.523 09:46:54 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:22.523 09:46:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:22.523 [2024-06-11 09:46:54.275147] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:22.523 [2024-06-11 09:46:54.275664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e413b0 (107): Transport endpoint is not connected 00:32:22.523 [2024-06-11 09:46:54.276659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e413b0 (9): Bad file descriptor 00:32:22.523 [2024-06-11 09:46:54.277660] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:22.523 [2024-06-11 09:46:54.277668] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:22.523 [2024-06-11 09:46:54.277675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:22.523 request: 00:32:22.523 { 00:32:22.523 "name": "nvme0", 00:32:22.523 "trtype": "tcp", 00:32:22.523 "traddr": "127.0.0.1", 00:32:22.523 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:22.523 "adrfam": "ipv4", 00:32:22.523 "trsvcid": "4420", 00:32:22.523 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:22.523 "psk": "key1", 00:32:22.523 "method": "bdev_nvme_attach_controller", 00:32:22.523 "req_id": 1 00:32:22.523 } 00:32:22.523 Got JSON-RPC error response 00:32:22.523 response: 00:32:22.523 { 00:32:22.523 "code": -5, 00:32:22.523 "message": "Input/output error" 00:32:22.523 } 00:32:22.523 09:46:54 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:22.523 09:46:54 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:22.523 09:46:54 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:22.523 09:46:54 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:22.523 09:46:54 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:22.523 09:46:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:22.523 09:46:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:22.523 09:46:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.523 09:46:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:22.523 09:46:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.783 09:46:54 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:22.783 09:46:54 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:22.783 09:46:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:22.783 09:46:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:22.783 09:46:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.783 09:46:54 keyring_file -- keyring/common.sh@10 -- # 
jq '.[] | select(.name == "key1")' 00:32:22.783 09:46:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.044 09:46:54 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:23.044 09:46:54 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:23.044 09:46:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:23.304 09:46:54 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:23.304 09:46:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:23.564 09:46:55 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:23.564 09:46:55 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:23.564 09:46:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:23.564 09:46:55 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:23.564 09:46:55 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.bLdy53elHQ 00:32:23.564 09:46:55 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.bLdy53elHQ 00:32:23.564 09:46:55 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:23.564 09:46:55 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.bLdy53elHQ 00:32:23.564 09:46:55 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:23.564 09:46:55 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:23.564 09:46:55 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:23.564 09:46:55 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:23.564 09:46:55 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bLdy53elHQ 00:32:23.564 09:46:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bLdy53elHQ 00:32:23.824 [2024-06-11 09:46:55.521085] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.bLdy53elHQ': 0100660 00:32:23.824 [2024-06-11 09:46:55.521106] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:23.824 request: 00:32:23.824 { 00:32:23.824 "name": "key0", 00:32:23.824 "path": "/tmp/tmp.bLdy53elHQ", 00:32:23.824 "method": "keyring_file_add_key", 00:32:23.824 "req_id": 1 00:32:23.824 } 00:32:23.824 Got JSON-RPC error response 00:32:23.824 response: 00:32:23.824 { 00:32:23.824 "code": -1, 00:32:23.824 "message": "Operation not permitted" 00:32:23.824 } 00:32:23.824 09:46:55 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:23.824 09:46:55 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:23.824 09:46:55 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:23.824 09:46:55 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:23.824 09:46:55 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.bLdy53elHQ 00:32:23.824 09:46:55 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.bLdy53elHQ 00:32:23.824 09:46:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bLdy53elHQ 00:32:24.084 09:46:55 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.bLdy53elHQ 00:32:24.084 09:46:55 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:24.084 09:46:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:24.084 09:46:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:24.084 09:46:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:24.084 09:46:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.084 09:46:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:24.344 09:46:55 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:24.344 09:46:55 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:24.344 09:46:55 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:24.344 09:46:55 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:24.344 09:46:55 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:24.344 09:46:55 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:24.344 09:46:55 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:24.344 09:46:55 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:24.344 09:46:55 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:24.344 09:46:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:24.344 [2024-06-11 09:46:56.110596] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.bLdy53elHQ': No such file or directory 00:32:24.344 [2024-06-11 09:46:56.110613] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:24.344 [2024-06-11 09:46:56.110635] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:24.344 [2024-06-11 09:46:56.110641] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:24.344 [2024-06-11 09:46:56.110648] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:24.344 request: 00:32:24.344 { 00:32:24.344 "name": "nvme0", 00:32:24.344 "trtype": "tcp", 00:32:24.344 "traddr": "127.0.0.1", 00:32:24.344 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:24.344 "adrfam": "ipv4", 00:32:24.344 "trsvcid": "4420", 00:32:24.344 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:24.344 "psk": "key0", 00:32:24.344 "method": "bdev_nvme_attach_controller", 
00:32:24.344 "req_id": 1 00:32:24.344 } 00:32:24.344 Got JSON-RPC error response 00:32:24.344 response: 00:32:24.344 { 00:32:24.344 "code": -19, 00:32:24.344 "message": "No such device" 00:32:24.344 } 00:32:24.344 09:46:56 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:24.344 09:46:56 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:24.344 09:46:56 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:24.344 09:46:56 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:24.344 09:46:56 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:24.344 09:46:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:24.605 09:46:56 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:24.605 09:46:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:24.605 09:46:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:24.605 09:46:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:24.605 09:46:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:24.605 09:46:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:24.605 09:46:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DTWfyc5IKJ 00:32:24.605 09:46:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:24.605 09:46:56 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:24.605 09:46:56 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:24.606 09:46:56 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:24.606 09:46:56 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:24.606 09:46:56 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:24.606 09:46:56 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:24.606 09:46:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DTWfyc5IKJ 00:32:24.606 09:46:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DTWfyc5IKJ 00:32:24.606 09:46:56 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.DTWfyc5IKJ 00:32:24.606 09:46:56 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DTWfyc5IKJ 00:32:24.606 09:46:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DTWfyc5IKJ 00:32:24.867 09:46:56 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:24.867 09:46:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:25.127 nvme0n1 00:32:25.127 09:46:56 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:25.127 09:46:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:25.127 09:46:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:25.127 09:46:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:25.127 09:46:56 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:25.127 09:46:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:25.387 09:46:57 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:25.387 09:46:57 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:25.387 09:46:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:25.647 09:46:57 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:25.647 09:46:57 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:25.647 09:46:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:25.647 09:46:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:25.647 09:46:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:25.907 09:46:57 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:25.907 09:46:57 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:25.907 09:46:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:25.907 09:46:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:25.907 09:46:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:25.907 09:46:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:25.907 09:46:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:25.907 09:46:57 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:25.907 09:46:57 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:25.907 09:46:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:26.165 09:46:57 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:26.166 09:46:57 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:26.166 09:46:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:26.425 09:46:58 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:26.425 09:46:58 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DTWfyc5IKJ 00:32:26.425 09:46:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DTWfyc5IKJ 00:32:26.685 09:46:58 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.awQCaji1ok 00:32:26.685 09:46:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.awQCaji1ok 00:32:26.944 09:46:58 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:26.944 09:46:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:27.204 nvme0n1 00:32:27.204 09:46:58 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:27.204 09:46:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:27.465 09:46:59 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:27.465 "subsystems": [ 00:32:27.465 { 00:32:27.465 "subsystem": "keyring", 00:32:27.465 "config": [ 00:32:27.465 { 00:32:27.465 "method": "keyring_file_add_key", 00:32:27.465 "params": { 00:32:27.465 "name": "key0", 00:32:27.465 "path": "/tmp/tmp.DTWfyc5IKJ" 00:32:27.465 } 00:32:27.465 }, 00:32:27.465 { 00:32:27.465 "method": "keyring_file_add_key", 00:32:27.465 "params": { 00:32:27.465 "name": "key1", 00:32:27.465 "path": "/tmp/tmp.awQCaji1ok" 00:32:27.465 } 00:32:27.465 } 00:32:27.465 ] 00:32:27.465 }, 00:32:27.465 { 00:32:27.465 "subsystem": "iobuf", 00:32:27.465 "config": [ 00:32:27.465 { 00:32:27.465 "method": "iobuf_set_options", 00:32:27.465 "params": { 00:32:27.465 "small_pool_count": 8192, 00:32:27.465 "large_pool_count": 1024, 00:32:27.465 "small_bufsize": 8192, 00:32:27.465 "large_bufsize": 135168 00:32:27.465 } 00:32:27.465 } 00:32:27.465 ] 00:32:27.465 }, 00:32:27.465 { 00:32:27.465 "subsystem": "sock", 00:32:27.465 "config": [ 00:32:27.465 { 00:32:27.465 "method": "sock_set_default_impl", 00:32:27.465 "params": { 00:32:27.465 "impl_name": "posix" 00:32:27.465 } 00:32:27.465 }, 00:32:27.465 { 00:32:27.465 "method": "sock_impl_set_options", 00:32:27.465 "params": { 00:32:27.465 "impl_name": "ssl", 00:32:27.465 "recv_buf_size": 4096, 00:32:27.465 "send_buf_size": 4096, 00:32:27.465 "enable_recv_pipe": true, 00:32:27.465 "enable_quickack": false, 00:32:27.465 "enable_placement_id": 0, 00:32:27.465 "enable_zerocopy_send_server": true, 00:32:27.465 "enable_zerocopy_send_client": false, 00:32:27.465 "zerocopy_threshold": 0, 00:32:27.465 "tls_version": 0, 00:32:27.465 "enable_ktls": false 00:32:27.465 } 00:32:27.465 }, 00:32:27.465 { 00:32:27.465 "method": "sock_impl_set_options", 00:32:27.465 "params": { 00:32:27.465 "impl_name": "posix", 00:32:27.465 "recv_buf_size": 2097152, 00:32:27.465 "send_buf_size": 2097152, 00:32:27.465 "enable_recv_pipe": true, 00:32:27.465 "enable_quickack": false, 00:32:27.465 "enable_placement_id": 0, 00:32:27.465 "enable_zerocopy_send_server": true, 00:32:27.465 "enable_zerocopy_send_client": false, 00:32:27.465 "zerocopy_threshold": 0, 00:32:27.465 "tls_version": 0, 00:32:27.465 "enable_ktls": false 00:32:27.465 } 00:32:27.465 } 00:32:27.465 ] 00:32:27.465 }, 00:32:27.465 { 00:32:27.465 "subsystem": "vmd", 00:32:27.465 "config": [] 00:32:27.465 }, 00:32:27.465 { 00:32:27.465 "subsystem": "accel", 00:32:27.465 "config": [ 00:32:27.465 { 00:32:27.465 "method": "accel_set_options", 00:32:27.465 "params": { 00:32:27.465 "small_cache_size": 128, 00:32:27.465 "large_cache_size": 16, 00:32:27.465 "task_count": 2048, 00:32:27.465 "sequence_count": 2048, 00:32:27.465 "buf_count": 2048 00:32:27.465 } 00:32:27.465 } 00:32:27.465 ] 00:32:27.465 }, 00:32:27.465 { 00:32:27.465 "subsystem": "bdev", 00:32:27.465 "config": [ 00:32:27.465 { 00:32:27.465 "method": "bdev_set_options", 00:32:27.465 "params": { 00:32:27.465 "bdev_io_pool_size": 65535, 00:32:27.465 "bdev_io_cache_size": 256, 00:32:27.465 "bdev_auto_examine": true, 00:32:27.465 "iobuf_small_cache_size": 128, 
00:32:27.465 "iobuf_large_cache_size": 16 00:32:27.465 } 00:32:27.465 }, 00:32:27.465 { 00:32:27.465 "method": "bdev_raid_set_options", 00:32:27.465 "params": { 00:32:27.465 "process_window_size_kb": 1024 00:32:27.465 } 00:32:27.465 }, 00:32:27.465 { 00:32:27.465 "method": "bdev_iscsi_set_options", 00:32:27.465 "params": { 00:32:27.465 "timeout_sec": 30 00:32:27.465 } 00:32:27.465 }, 00:32:27.465 { 00:32:27.465 "method": "bdev_nvme_set_options", 00:32:27.465 "params": { 00:32:27.465 "action_on_timeout": "none", 00:32:27.465 "timeout_us": 0, 00:32:27.465 "timeout_admin_us": 0, 00:32:27.465 "keep_alive_timeout_ms": 10000, 00:32:27.465 "arbitration_burst": 0, 00:32:27.465 "low_priority_weight": 0, 00:32:27.465 "medium_priority_weight": 0, 00:32:27.465 "high_priority_weight": 0, 00:32:27.465 "nvme_adminq_poll_period_us": 10000, 00:32:27.465 "nvme_ioq_poll_period_us": 0, 00:32:27.465 "io_queue_requests": 512, 00:32:27.465 "delay_cmd_submit": true, 00:32:27.465 "transport_retry_count": 4, 00:32:27.465 "bdev_retry_count": 3, 00:32:27.465 "transport_ack_timeout": 0, 00:32:27.465 "ctrlr_loss_timeout_sec": 0, 00:32:27.465 "reconnect_delay_sec": 0, 00:32:27.465 "fast_io_fail_timeout_sec": 0, 00:32:27.465 "disable_auto_failback": false, 00:32:27.465 "generate_uuids": false, 00:32:27.465 "transport_tos": 0, 00:32:27.465 "nvme_error_stat": false, 00:32:27.465 "rdma_srq_size": 0, 00:32:27.465 "io_path_stat": false, 00:32:27.465 "allow_accel_sequence": false, 00:32:27.465 "rdma_max_cq_size": 0, 00:32:27.465 "rdma_cm_event_timeout_ms": 0, 00:32:27.465 "dhchap_digests": [ 00:32:27.465 "sha256", 00:32:27.465 "sha384", 00:32:27.465 "sha512" 00:32:27.465 ], 00:32:27.465 "dhchap_dhgroups": [ 00:32:27.465 "null", 00:32:27.465 "ffdhe2048", 00:32:27.465 "ffdhe3072", 00:32:27.465 "ffdhe4096", 00:32:27.465 "ffdhe6144", 00:32:27.465 "ffdhe8192" 00:32:27.465 ] 00:32:27.465 } 00:32:27.465 }, 00:32:27.465 { 00:32:27.465 "method": "bdev_nvme_attach_controller", 00:32:27.465 "params": { 00:32:27.465 "name": "nvme0", 00:32:27.465 "trtype": "TCP", 00:32:27.465 "adrfam": "IPv4", 00:32:27.465 "traddr": "127.0.0.1", 00:32:27.465 "trsvcid": "4420", 00:32:27.465 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:27.465 "prchk_reftag": false, 00:32:27.465 "prchk_guard": false, 00:32:27.465 "ctrlr_loss_timeout_sec": 0, 00:32:27.465 "reconnect_delay_sec": 0, 00:32:27.465 "fast_io_fail_timeout_sec": 0, 00:32:27.465 "psk": "key0", 00:32:27.465 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:27.465 "hdgst": false, 00:32:27.465 "ddgst": false 00:32:27.465 } 00:32:27.465 }, 00:32:27.465 { 00:32:27.465 "method": "bdev_nvme_set_hotplug", 00:32:27.465 "params": { 00:32:27.465 "period_us": 100000, 00:32:27.465 "enable": false 00:32:27.465 } 00:32:27.465 }, 00:32:27.465 { 00:32:27.465 "method": "bdev_wait_for_examine" 00:32:27.465 } 00:32:27.465 ] 00:32:27.465 }, 00:32:27.465 { 00:32:27.465 "subsystem": "nbd", 00:32:27.465 "config": [] 00:32:27.465 } 00:32:27.465 ] 00:32:27.465 }' 00:32:27.465 09:46:59 keyring_file -- keyring/file.sh@114 -- # killprocess 1377682 00:32:27.465 09:46:59 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1377682 ']' 00:32:27.465 09:46:59 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1377682 00:32:27.465 09:46:59 keyring_file -- common/autotest_common.sh@954 -- # uname 00:32:27.465 09:46:59 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:27.465 09:46:59 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1377682 00:32:27.465 09:46:59 keyring_file 
-- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:27.465 09:46:59 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:27.465 09:46:59 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1377682' 00:32:27.465 killing process with pid 1377682 00:32:27.465 09:46:59 keyring_file -- common/autotest_common.sh@968 -- # kill 1377682 00:32:27.465 Received shutdown signal, test time was about 1.000000 seconds 00:32:27.465 00:32:27.465 Latency(us) 00:32:27.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.465 =================================================================================================================== 00:32:27.465 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:27.465 09:46:59 keyring_file -- common/autotest_common.sh@973 -- # wait 1377682 00:32:27.726 09:46:59 keyring_file -- keyring/file.sh@117 -- # bperfpid=1379493 00:32:27.726 09:46:59 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1379493 /var/tmp/bperf.sock 00:32:27.726 09:46:59 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1379493 ']' 00:32:27.726 09:46:59 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:27.726 09:46:59 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:27.726 09:46:59 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:27.726 09:46:59 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:27.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
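Note the -c /dev/fd/63 argument in the bdevperf relaunch above: the configuration captured from the first instance is piped back in through bash process substitution rather than a file on disk, so the new bdevperf comes up with both keys and the TLS controller already configured. A sketch of the shell mechanism only (the variable name is illustrative; file.sh wires this up with its own helpers):

config=$(rpc.py -s /var/tmp/bperf.sock save_config)   # dump the old instance's state
bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")    # <(...) shows up as /dev/fd/63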
00:32:27.726 09:46:59 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:27.726 09:46:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:27.726 09:46:59 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:27.726 "subsystems": [ 00:32:27.726 { 00:32:27.726 "subsystem": "keyring", 00:32:27.726 "config": [ 00:32:27.726 { 00:32:27.726 "method": "keyring_file_add_key", 00:32:27.726 "params": { 00:32:27.726 "name": "key0", 00:32:27.726 "path": "/tmp/tmp.DTWfyc5IKJ" 00:32:27.726 } 00:32:27.726 }, 00:32:27.726 { 00:32:27.726 "method": "keyring_file_add_key", 00:32:27.726 "params": { 00:32:27.726 "name": "key1", 00:32:27.726 "path": "/tmp/tmp.awQCaji1ok" 00:32:27.726 } 00:32:27.726 } 00:32:27.726 ] 00:32:27.726 }, 00:32:27.726 { 00:32:27.726 "subsystem": "iobuf", 00:32:27.726 "config": [ 00:32:27.726 { 00:32:27.726 "method": "iobuf_set_options", 00:32:27.726 "params": { 00:32:27.726 "small_pool_count": 8192, 00:32:27.726 "large_pool_count": 1024, 00:32:27.726 "small_bufsize": 8192, 00:32:27.726 "large_bufsize": 135168 00:32:27.726 } 00:32:27.726 } 00:32:27.726 ] 00:32:27.726 }, 00:32:27.726 { 00:32:27.727 "subsystem": "sock", 00:32:27.727 "config": [ 00:32:27.727 { 00:32:27.727 "method": "sock_set_default_impl", 00:32:27.727 "params": { 00:32:27.727 "impl_name": "posix" 00:32:27.727 } 00:32:27.727 }, 00:32:27.727 { 00:32:27.727 "method": "sock_impl_set_options", 00:32:27.727 "params": { 00:32:27.727 "impl_name": "ssl", 00:32:27.727 "recv_buf_size": 4096, 00:32:27.727 "send_buf_size": 4096, 00:32:27.727 "enable_recv_pipe": true, 00:32:27.727 "enable_quickack": false, 00:32:27.727 "enable_placement_id": 0, 00:32:27.727 "enable_zerocopy_send_server": true, 00:32:27.727 "enable_zerocopy_send_client": false, 00:32:27.727 "zerocopy_threshold": 0, 00:32:27.727 "tls_version": 0, 00:32:27.727 "enable_ktls": false 00:32:27.727 } 00:32:27.727 }, 00:32:27.727 { 00:32:27.727 "method": "sock_impl_set_options", 00:32:27.727 "params": { 00:32:27.727 "impl_name": "posix", 00:32:27.727 "recv_buf_size": 2097152, 00:32:27.727 "send_buf_size": 2097152, 00:32:27.727 "enable_recv_pipe": true, 00:32:27.727 "enable_quickack": false, 00:32:27.727 "enable_placement_id": 0, 00:32:27.727 "enable_zerocopy_send_server": true, 00:32:27.727 "enable_zerocopy_send_client": false, 00:32:27.727 "zerocopy_threshold": 0, 00:32:27.727 "tls_version": 0, 00:32:27.727 "enable_ktls": false 00:32:27.727 } 00:32:27.727 } 00:32:27.727 ] 00:32:27.727 }, 00:32:27.727 { 00:32:27.727 "subsystem": "vmd", 00:32:27.727 "config": [] 00:32:27.727 }, 00:32:27.727 { 00:32:27.727 "subsystem": "accel", 00:32:27.727 "config": [ 00:32:27.727 { 00:32:27.727 "method": "accel_set_options", 00:32:27.727 "params": { 00:32:27.727 "small_cache_size": 128, 00:32:27.727 "large_cache_size": 16, 00:32:27.727 "task_count": 2048, 00:32:27.727 "sequence_count": 2048, 00:32:27.727 "buf_count": 2048 00:32:27.727 } 00:32:27.727 } 00:32:27.727 ] 00:32:27.727 }, 00:32:27.727 { 00:32:27.727 "subsystem": "bdev", 00:32:27.727 "config": [ 00:32:27.727 { 00:32:27.727 "method": "bdev_set_options", 00:32:27.727 "params": { 00:32:27.727 "bdev_io_pool_size": 65535, 00:32:27.727 "bdev_io_cache_size": 256, 00:32:27.727 "bdev_auto_examine": true, 00:32:27.727 "iobuf_small_cache_size": 128, 00:32:27.727 "iobuf_large_cache_size": 16 00:32:27.727 } 00:32:27.727 }, 00:32:27.727 { 00:32:27.727 "method": "bdev_raid_set_options", 00:32:27.727 "params": { 00:32:27.727 "process_window_size_kb": 1024 00:32:27.727 } 00:32:27.727 }, 00:32:27.727 { 00:32:27.727 
"method": "bdev_iscsi_set_options", 00:32:27.727 "params": { 00:32:27.727 "timeout_sec": 30 00:32:27.727 } 00:32:27.727 }, 00:32:27.727 { 00:32:27.727 "method": "bdev_nvme_set_options", 00:32:27.727 "params": { 00:32:27.727 "action_on_timeout": "none", 00:32:27.727 "timeout_us": 0, 00:32:27.727 "timeout_admin_us": 0, 00:32:27.727 "keep_alive_timeout_ms": 10000, 00:32:27.727 "arbitration_burst": 0, 00:32:27.727 "low_priority_weight": 0, 00:32:27.727 "medium_priority_weight": 0, 00:32:27.727 "high_priority_weight": 0, 00:32:27.727 "nvme_adminq_poll_period_us": 10000, 00:32:27.727 "nvme_ioq_poll_period_us": 0, 00:32:27.727 "io_queue_requests": 512, 00:32:27.727 "delay_cmd_submit": true, 00:32:27.727 "transport_retry_count": 4, 00:32:27.727 "bdev_retry_count": 3, 00:32:27.727 "transport_ack_timeout": 0, 00:32:27.727 "ctrlr_loss_timeout_sec": 0, 00:32:27.727 "reconnect_delay_sec": 0, 00:32:27.727 "fast_io_fail_timeout_sec": 0, 00:32:27.727 "disable_auto_failback": false, 00:32:27.727 "generate_uuids": false, 00:32:27.727 "transport_tos": 0, 00:32:27.727 "nvme_error_stat": false, 00:32:27.727 "rdma_srq_size": 0, 00:32:27.727 "io_path_stat": false, 00:32:27.727 "allow_accel_sequence": false, 00:32:27.727 "rdma_max_cq_size": 0, 00:32:27.727 "rdma_cm_event_timeout_ms": 0, 00:32:27.727 "dhchap_digests": [ 00:32:27.727 "sha256", 00:32:27.727 "sha384", 00:32:27.727 "sha512" 00:32:27.727 ], 00:32:27.727 "dhchap_dhgroups": [ 00:32:27.727 "null", 00:32:27.727 "ffdhe2048", 00:32:27.727 "ffdhe3072", 00:32:27.727 "ffdhe4096", 00:32:27.727 "ffdhe6144", 00:32:27.727 "ffdhe8192" 00:32:27.727 ] 00:32:27.727 } 00:32:27.727 }, 00:32:27.727 { 00:32:27.727 "method": "bdev_nvme_attach_controller", 00:32:27.727 "params": { 00:32:27.727 "name": "nvme0", 00:32:27.727 "trtype": "TCP", 00:32:27.727 "adrfam": "IPv4", 00:32:27.727 "traddr": "127.0.0.1", 00:32:27.727 "trsvcid": "4420", 00:32:27.727 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:27.727 "prchk_reftag": false, 00:32:27.727 "prchk_guard": false, 00:32:27.727 "ctrlr_loss_timeout_sec": 0, 00:32:27.727 "reconnect_delay_sec": 0, 00:32:27.727 "fast_io_fail_timeout_sec": 0, 00:32:27.727 "psk": "key0", 00:32:27.727 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:27.727 "hdgst": false, 00:32:27.727 "ddgst": false 00:32:27.727 } 00:32:27.727 }, 00:32:27.727 { 00:32:27.727 "method": "bdev_nvme_set_hotplug", 00:32:27.727 "params": { 00:32:27.727 "period_us": 100000, 00:32:27.727 "enable": false 00:32:27.727 } 00:32:27.727 }, 00:32:27.727 { 00:32:27.727 "method": "bdev_wait_for_examine" 00:32:27.727 } 00:32:27.727 ] 00:32:27.727 }, 00:32:27.727 { 00:32:27.727 "subsystem": "nbd", 00:32:27.727 "config": [] 00:32:27.727 } 00:32:27.727 ] 00:32:27.727 }' 00:32:27.727 [2024-06-11 09:46:59.332451] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
00:32:27.727 [2024-06-11 09:46:59.332503] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379493 ] 00:32:27.727 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.727 [2024-06-11 09:46:59.391046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.727 [2024-06-11 09:46:59.453538] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.987 [2024-06-11 09:46:59.600583] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:28.559 09:47:00 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:28.559 09:47:00 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:32:28.559 09:47:00 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:28.559 09:47:00 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:28.559 09:47:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:28.819 09:47:00 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:28.819 09:47:00 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:28.819 09:47:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:28.819 09:47:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:28.819 09:47:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:28.819 09:47:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:28.819 09:47:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:28.819 09:47:00 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:28.819 09:47:00 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:28.819 09:47:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:28.819 09:47:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:28.819 09:47:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:28.819 09:47:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:28.819 09:47:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:29.079 09:47:00 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:29.079 09:47:00 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:29.079 09:47:00 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:29.079 09:47:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:29.339 09:47:01 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:29.339 09:47:01 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:29.339 09:47:01 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.DTWfyc5IKJ /tmp/tmp.awQCaji1ok 00:32:29.339 09:47:01 keyring_file -- keyring/file.sh@20 -- # killprocess 1379493 00:32:29.339 09:47:01 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1379493 ']' 00:32:29.339 09:47:01 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1379493 00:32:29.339 09:47:01 keyring_file -- common/autotest_common.sh@954 -- # 
uname 00:32:29.339 09:47:01 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:29.339 09:47:01 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1379493 00:32:29.339 09:47:01 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:29.339 09:47:01 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:29.339 09:47:01 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1379493' 00:32:29.339 killing process with pid 1379493 00:32:29.339 09:47:01 keyring_file -- common/autotest_common.sh@968 -- # kill 1379493 00:32:29.339 Received shutdown signal, test time was about 1.000000 seconds 00:32:29.339 00:32:29.339 Latency(us) 00:32:29.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.339 =================================================================================================================== 00:32:29.339 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:29.339 09:47:01 keyring_file -- common/autotest_common.sh@973 -- # wait 1379493 00:32:29.600 09:47:01 keyring_file -- keyring/file.sh@21 -- # killprocess 1377386 00:32:29.600 09:47:01 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1377386 ']' 00:32:29.600 09:47:01 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1377386 00:32:29.600 09:47:01 keyring_file -- common/autotest_common.sh@954 -- # uname 00:32:29.600 09:47:01 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:29.600 09:47:01 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1377386 00:32:29.600 09:47:01 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:29.600 09:47:01 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:29.600 09:47:01 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1377386' 00:32:29.600 killing process with pid 1377386 00:32:29.600 09:47:01 keyring_file -- common/autotest_common.sh@968 -- # kill 1377386 00:32:29.600 [2024-06-11 09:47:01.273192] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:29.600 09:47:01 keyring_file -- common/autotest_common.sh@973 -- # wait 1377386 00:32:29.861 00:32:29.861 real 0m12.517s 00:32:29.861 user 0m30.464s 00:32:29.861 sys 0m2.864s 00:32:29.861 09:47:01 keyring_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:29.861 09:47:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:29.861 ************************************ 00:32:29.861 END TEST keyring_file 00:32:29.861 ************************************ 00:32:29.861 09:47:01 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:32:29.861 09:47:01 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:29.861 09:47:01 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:29.861 09:47:01 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:29.861 09:47:01 -- common/autotest_common.sh@10 -- # set +x 00:32:29.861 ************************************ 00:32:29.861 START TEST keyring_linux 00:32:29.861 ************************************ 00:32:29.861 09:47:01 keyring_linux -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:29.861 * Looking for test storage... 
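Both shutdowns above walk the same killprocess idiom: confirm the pid is still alive with kill -0, inspect what it is actually running via ps (the traces compare the name against sudo before proceeding), then kill and reap it. A simplified sketch of that flow; the real helper in autotest_common.sh carries extra guards for sudo-wrapped processes and non-Linux hosts:

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 0                       # already gone, nothing to do
    ps --no-headers -o comm= "$pid"                  # log what is about to be killed
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null || true   # reap; tolerate a racing exit
}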
00:32:29.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:29.861 09:47:01 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:29.861 09:47:01 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:29.861 09:47:01 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:29.861 09:47:01 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:29.861 09:47:01 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:29.861 09:47:01 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:29.861 09:47:01 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:29.861 09:47:01 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:29.861 09:47:01 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:29.861 09:47:01 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:29.861 09:47:01 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:29.861 09:47:01 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:29.861 09:47:01 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:30.122 09:47:01 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:30.122 09:47:01 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:30.122 09:47:01 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:30.122 09:47:01 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.122 09:47:01 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.122 09:47:01 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.122 09:47:01 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:30.122 09:47:01 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:30.122 09:47:01 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:30.122 09:47:01 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:30.122 09:47:01 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:30.122 09:47:01 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:30.122 09:47:01 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:30.122 09:47:01 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:30.122 09:47:01 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:30.122 09:47:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:30.122 09:47:01 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:30.122 09:47:01 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:30.122 09:47:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:30.122 09:47:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:30.122 09:47:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:30.122 09:47:01 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:30.122 09:47:01 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:30.122 09:47:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:30.122 /tmp/:spdk-test:key0 00:32:30.122 09:47:01 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:30.122 09:47:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:30.122 09:47:01 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:30.122 09:47:01 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:30.123 09:47:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:30.123 09:47:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:30.123 09:47:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:30.123 09:47:01 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:30.123 09:47:01 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:30.123 09:47:01 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:30.123 09:47:01 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:30.123 09:47:01 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:30.123 09:47:01 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:30.123 09:47:01 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:30.123 09:47:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:30.123 /tmp/:spdk-test:key1 00:32:30.123 09:47:01 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1379947 00:32:30.123 09:47:01 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1379947 00:32:30.123 09:47:01 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:30.123 09:47:01 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 1379947 ']' 00:32:30.123 09:47:01 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.123 09:47:01 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:30.123 09:47:01 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:30.123 09:47:01 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:30.123 09:47:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:30.123 [2024-06-11 09:47:01.866446] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
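Annotation: the prep_key/format_interchange_psk steps above wrap each configured hex string into the NVMe/TCP TLS PSK interchange format: prefix, two-digit hash identifier, then base64 of the key bytes with a 4-byte CRC32 appended, colon-terminated. A minimal sketch of what the inline "python -" heredoc appears to compute (the CRC byte order and encoding details are inferred from the NVMeTLSkey-1:00:... strings that show up in the keyctl calls below, not taken from the script source; digest 0, i.e. no retained hash, is hardcoded here):

# sketch (inferred): reproduce the interchange-format PSK for key0
key=00112233445566778899aabbccddeeff
python3 -c 'import base64,sys,zlib
k=sys.argv[1].encode()                        # configured key, as ASCII bytes
crc=zlib.crc32(k).to_bytes(4,"little")        # 4-byte CRC32, little-endian
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(k+crc).decode())' "$key"
# should print the key0 string loaded into the keyring below:
#   NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The chmod 0600 above matters: the file form of the PSK is secret material, so the test restricts it to the owner before handing the path around.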
00:32:30.123 [2024-06-11 09:47:01.866530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379947 ] 00:32:30.123 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.401 [2024-06-11 09:47:01.949636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.401 [2024-06-11 09:47:02.030511] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.007 09:47:02 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:31.007 09:47:02 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:32:31.007 09:47:02 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:31.007 09:47:02 keyring_linux -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:31.007 09:47:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:31.007 [2024-06-11 09:47:02.730382] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:31.007 null0 00:32:31.007 [2024-06-11 09:47:02.762426] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:31.007 [2024-06-11 09:47:02.762805] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:31.007 09:47:02 keyring_linux -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:31.007 09:47:02 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:31.007 823042307 00:32:31.007 09:47:02 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:31.007 509396980 00:32:31.007 09:47:02 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1380256 00:32:31.007 09:47:02 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1380256 /var/tmp/bperf.sock 00:32:31.007 09:47:02 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:31.007 09:47:02 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 1380256 ']' 00:32:31.007 09:47:02 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:31.007 09:47:02 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:31.007 09:47:02 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:31.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:31.007 09:47:02 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:31.007 09:47:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:31.273 [2024-06-11 09:47:02.847156] Starting SPDK v24.09-pre git sha1 b16523e5e / DPDK 24.03.0 initialization... 
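Annotation: both PSKs are then loaded into the kernel session keyring (@s) under the names the bperf RPCs will reference; keyctl add prints the key serial number (823042307 and 509396980 above), which the later keyctl search/print/unlink calls resolve by name. A standalone repro of that flow, assuming keyutils is installed and reusing the same key name as this test:

psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
sn=$(keyctl add user :spdk-test:key0 "$psk" @s)   # add to session keyring, prints serial
keyctl print "$sn"                                # dump the payload back
keyctl search @s user :spdk-test:key0             # name -> serial lookup
keyctl unlink "$sn" @s                            # remove when done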
00:32:31.273 [2024-06-11 09:47:02.847210] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1380256 ] 00:32:31.273 EAL: No free 2048 kB hugepages reported on node 1 00:32:31.273 [2024-06-11 09:47:02.905408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.273 [2024-06-11 09:47:02.969250] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.273 09:47:02 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:31.273 09:47:02 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:32:31.273 09:47:02 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:31.273 09:47:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:31.533 09:47:03 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:31.533 09:47:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:31.792 09:47:03 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:31.792 09:47:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:32.053 [2024-06-11 09:47:03.655447] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:32.053 nvme0n1 00:32:32.053 09:47:03 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:32.053 09:47:03 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:32.053 09:47:03 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:32.053 09:47:03 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:32.053 09:47:03 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:32.053 09:47:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.313 09:47:03 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:32.313 09:47:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:32.313 09:47:03 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:32.313 09:47:03 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:32.313 09:47:03 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:32.313 09:47:03 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:32.313 09:47:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.573 09:47:04 keyring_linux -- keyring/linux.sh@25 -- # sn=823042307 00:32:32.573 09:47:04 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:32.573 09:47:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
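Annotation: once bdevperf is up, keyring_linux_set_options --enable turns the Linux keyring provider on before framework_start_init, and bdev_nvme_attach_controller passes the key by keyring name (--psk :spdk-test:key0) rather than by file path. The check_keys probe above then verifies the registered key count and serial through the bperf RPC socket; condensed (same paths as this job), it amounts to:

rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
$rpc keyring_get_keys | jq length                  # expect exactly 1 key registered
$rpc keyring_get_keys \
    | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn'
keyctl search @s user :spdk-test:key0              # must print the same serial the
                                                   # [[ 823042307 == ... ]] check below compares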
00:32:32.573 09:47:04 keyring_linux -- keyring/linux.sh@26 -- # [[ 823042307 == \8\2\3\0\4\2\3\0\7 ]] 00:32:32.573 09:47:04 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 823042307 00:32:32.573 09:47:04 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:32.573 09:47:04 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:32.573 Running I/O for 1 seconds... 00:32:33.512 00:32:33.512 Latency(us) 00:32:33.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.512 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:33.512 nvme0n1 : 1.01 9785.92 38.23 0.00 0.00 13011.59 8028.16 21736.11 00:32:33.512 =================================================================================================================== 00:32:33.512 Total : 9785.92 38.23 0.00 0.00 13011.59 8028.16 21736.11 00:32:33.512 0 00:32:33.512 09:47:05 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:33.512 09:47:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:33.772 09:47:05 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:33.772 09:47:05 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:33.772 09:47:05 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:33.772 09:47:05 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:33.772 09:47:05 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:33.772 09:47:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:34.032 09:47:05 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:34.032 09:47:05 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:34.032 09:47:05 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:34.032 09:47:05 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:34.032 09:47:05 keyring_linux -- common/autotest_common.sh@649 -- # local es=0 00:32:34.032 09:47:05 keyring_linux -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:34.032 09:47:05 keyring_linux -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:34.032 09:47:05 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:34.032 09:47:05 keyring_linux -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:34.032 09:47:05 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:34.032 09:47:05 keyring_linux -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:34.032 09:47:05 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:34.291 [2024-06-11 09:47:05.944771] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:34.291 [2024-06-11 09:47:05.944843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0d3b0 (107): Transport endpoint is not connected 00:32:34.291 [2024-06-11 09:47:05.945835] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0d3b0 (9): Bad file descriptor 00:32:34.291 [2024-06-11 09:47:05.946836] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:34.291 [2024-06-11 09:47:05.946845] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:34.291 [2024-06-11 09:47:05.946852] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:34.291 request: 00:32:34.291 { 00:32:34.291 "name": "nvme0", 00:32:34.291 "trtype": "tcp", 00:32:34.291 "traddr": "127.0.0.1", 00:32:34.291 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:34.291 "adrfam": "ipv4", 00:32:34.291 "trsvcid": "4420", 00:32:34.291 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:34.291 "psk": ":spdk-test:key1", 00:32:34.291 "method": "bdev_nvme_attach_controller", 00:32:34.291 "req_id": 1 00:32:34.291 } 00:32:34.291 Got JSON-RPC error response 00:32:34.291 response: 00:32:34.291 { 00:32:34.291 "code": -5, 00:32:34.291 "message": "Input/output error" 00:32:34.291 } 00:32:34.291 09:47:05 keyring_linux -- common/autotest_common.sh@652 -- # es=1 00:32:34.291 09:47:05 keyring_linux -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:34.291 09:47:05 keyring_linux -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:34.291 09:47:05 keyring_linux -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:34.292 09:47:05 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:34.292 09:47:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:34.292 09:47:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:34.292 09:47:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:34.292 09:47:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:34.292 09:47:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:34.292 09:47:05 keyring_linux -- keyring/linux.sh@33 -- # sn=823042307 00:32:34.292 09:47:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 823042307 00:32:34.292 1 links removed 00:32:34.292 09:47:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:34.292 09:47:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:34.292 09:47:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:34.292 09:47:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:34.292 09:47:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:34.292 09:47:05 keyring_linux -- keyring/linux.sh@33 -- # sn=509396980 00:32:34.292 09:47:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 509396980 00:32:34.292 1 links removed 00:32:34.292 09:47:05 keyring_linux -- keyring/linux.sh@41 
-- # killprocess 1380256 00:32:34.292 09:47:05 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 1380256 ']' 00:32:34.292 09:47:05 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 1380256 00:32:34.292 09:47:05 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:32:34.292 09:47:05 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:34.292 09:47:05 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1380256 00:32:34.292 09:47:06 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:34.292 09:47:06 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:34.292 09:47:06 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1380256' 00:32:34.292 killing process with pid 1380256 00:32:34.292 09:47:06 keyring_linux -- common/autotest_common.sh@968 -- # kill 1380256 00:32:34.292 Received shutdown signal, test time was about 1.000000 seconds 00:32:34.292 00:32:34.292 Latency(us) 00:32:34.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:34.292 =================================================================================================================== 00:32:34.292 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:34.292 09:47:06 keyring_linux -- common/autotest_common.sh@973 -- # wait 1380256 00:32:34.552 09:47:06 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1379947 00:32:34.552 09:47:06 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 1379947 ']' 00:32:34.552 09:47:06 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 1379947 00:32:34.552 09:47:06 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:32:34.552 09:47:06 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:34.552 09:47:06 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1379947 00:32:34.552 09:47:06 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:34.552 09:47:06 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:34.552 09:47:06 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1379947' 00:32:34.552 killing process with pid 1379947 00:32:34.552 09:47:06 keyring_linux -- common/autotest_common.sh@968 -- # kill 1379947 00:32:34.552 09:47:06 keyring_linux -- common/autotest_common.sh@973 -- # wait 1379947 00:32:34.812 00:32:34.812 real 0m4.870s 00:32:34.812 user 0m8.716s 00:32:34.812 sys 0m1.407s 00:32:34.812 09:47:06 keyring_linux -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:34.812 09:47:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:34.812 ************************************ 00:32:34.812 END TEST keyring_linux 00:32:34.812 ************************************ 00:32:34.812 09:47:06 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:34.812 09:47:06 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:34.812 09:47:06 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:34.812 09:47:06 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:32:34.812 09:47:06 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:32:34.812 09:47:06 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:34.812 09:47:06 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:34.812 09:47:06 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:34.812 09:47:06 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:34.812 09:47:06 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 
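Annotation: cleanup unlinks both keys by serial and killprocess (from common/autotest_common.sh) tears down bdevperf and spdk_tgt. Note the negative test just above behaved as intended: attaching with :spdk-test:key1 fails with a JSON-RPC Input/output error because the target side only accepts key0. Judging from the xtrace, killprocess roughly does the following (a simplified sketch; the real helper has more retry and sudo-wrapper handling):

killprocess() {                                    # simplified from the xtrace above
    local pid=$1 name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                     # is the process still alive?
    [ "$(uname)" = Linux ] && name=$(ps --no-headers -o comm= "$pid")
    # (the real helper special-cases name == sudo; the reactor_0/reactor_1
    #  processes here take the plain path)
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                            # reap; works because pid is a child shell job
}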
00:32:34.812 09:47:06 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:34.812 09:47:06 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:34.812 09:47:06 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:34.812 09:47:06 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:34.812 09:47:06 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:34.812 09:47:06 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:34.812 09:47:06 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:34.812 09:47:06 -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:34.812 09:47:06 -- common/autotest_common.sh@10 -- # set +x 00:32:34.812 09:47:06 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:34.812 09:47:06 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:32:34.812 09:47:06 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:32:34.812 09:47:06 -- common/autotest_common.sh@10 -- # set +x 00:32:42.950 INFO: APP EXITING 00:32:42.950 INFO: killing all VMs 00:32:42.950 INFO: killing vhost app 00:32:42.950 WARN: no vhost pid file found 00:32:42.950 INFO: EXIT DONE 00:32:45.496 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:32:45.756 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:32:45.756 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:32:45.756 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:32:45.756 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:32:45.756 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:32:45.756 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:32:45.756 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:32:45.756 0000:65:00.0 (144d a80a): Already using the nvme driver 00:32:45.756 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:32:45.756 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:32:45.756 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:32:46.018 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:32:46.018 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:32:46.018 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:32:46.018 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:32:46.018 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:32:49.321 Cleaning 00:32:49.321 Removing: /var/run/dpdk/spdk0/config 00:32:49.321 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:49.321 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:49.321 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:49.321 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:49.321 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:49.321 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:49.321 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:49.321 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:49.321 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:49.321 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:49.321 Removing: /var/run/dpdk/spdk1/config 00:32:49.321 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:49.321 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:49.321 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:49.321 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:49.321 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:49.321 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:49.321 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:49.321 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:49.321 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:49.321 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:49.321 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:49.321 Removing: /var/run/dpdk/spdk2/config 00:32:49.321 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:49.321 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:49.321 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:49.321 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:49.321 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:49.321 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:49.321 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:49.321 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:49.321 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:49.321 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:49.321 Removing: /var/run/dpdk/spdk3/config 00:32:49.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:49.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:49.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:49.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:49.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:49.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:49.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:49.321 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:49.321 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:49.321 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:49.582 Removing: /var/run/dpdk/spdk4/config 00:32:49.582 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:49.582 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:49.582 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:49.582 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:49.582 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:49.582 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:49.582 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:49.582 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:49.582 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:49.582 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:49.582 Removing: /dev/shm/bdev_svc_trace.1 00:32:49.582 Removing: /dev/shm/nvmf_trace.0 00:32:49.582 Removing: /dev/shm/spdk_tgt_trace.pid888096 00:32:49.582 Removing: /var/run/dpdk/spdk0 00:32:49.582 Removing: /var/run/dpdk/spdk1 00:32:49.582 Removing: /var/run/dpdk/spdk2 00:32:49.582 Removing: /var/run/dpdk/spdk3 00:32:49.582 Removing: /var/run/dpdk/spdk4 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1000069 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1014094 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1014096 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1015102 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1016129 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1017325 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1017987 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1018131 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1018372 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1018559 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1018567 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1020034 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1021047 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1022106 00:32:49.582 Removing: 
/var/run/dpdk/spdk_pid1022742 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1022870 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1023143 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1024979 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1026881 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1040459 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1041056 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1048600 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1059680 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1065540 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1080141 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1092640 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1095083 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1096432 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1121203 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1126846 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1169042 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1174507 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1176434 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1178618 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1178780 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1178796 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1178995 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1179513 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1181597 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1182593 00:32:49.582 Removing: /var/run/dpdk/spdk_pid1183039 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1185691 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1186386 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1187119 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1192602 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1204612 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1209440 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1216613 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1218009 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1219658 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1224978 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1229772 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1238789 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1238833 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1243878 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1244213 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1244378 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1244892 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1244906 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1250907 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1251673 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1256845 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1260008 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1266574 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1273332 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1283676 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1292000 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1292002 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1314621 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1315194 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1315774 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1316372 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1317428 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1318106 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1318747 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1319324 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1324177 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1324520 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1331570 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1331936 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1334450 00:32:49.843 Removing: 
/var/run/dpdk/spdk_pid1341727 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1341785 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1347582 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1350040 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1352754 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1354207 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1356504 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1357926 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1367851 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1368384 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1368981 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1371793 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1372316 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1372813 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1377386 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1377682 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1379493 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1379947 00:32:49.843 Removing: /var/run/dpdk/spdk_pid1380256 00:32:49.843 Removing: /var/run/dpdk/spdk_pid886556 00:32:49.843 Removing: /var/run/dpdk/spdk_pid888096 00:32:49.843 Removing: /var/run/dpdk/spdk_pid888932 00:32:50.104 Removing: /var/run/dpdk/spdk_pid889978 00:32:50.104 Removing: /var/run/dpdk/spdk_pid890316 00:32:50.104 Removing: /var/run/dpdk/spdk_pid891381 00:32:50.104 Removing: /var/run/dpdk/spdk_pid891577 00:32:50.104 Removing: /var/run/dpdk/spdk_pid891836 00:32:50.104 Removing: /var/run/dpdk/spdk_pid892972 00:32:50.104 Removing: /var/run/dpdk/spdk_pid893606 00:32:50.104 Removing: /var/run/dpdk/spdk_pid893950 00:32:50.104 Removing: /var/run/dpdk/spdk_pid894270 00:32:50.104 Removing: /var/run/dpdk/spdk_pid894647 00:32:50.104 Removing: /var/run/dpdk/spdk_pid894999 00:32:50.104 Removing: /var/run/dpdk/spdk_pid895356 00:32:50.104 Removing: /var/run/dpdk/spdk_pid895706 00:32:50.104 Removing: /var/run/dpdk/spdk_pid896051 00:32:50.104 Removing: /var/run/dpdk/spdk_pid897148 00:32:50.104 Removing: /var/run/dpdk/spdk_pid900632 00:32:50.104 Removing: /var/run/dpdk/spdk_pid900973 00:32:50.104 Removing: /var/run/dpdk/spdk_pid901208 00:32:50.104 Removing: /var/run/dpdk/spdk_pid901476 00:32:50.104 Removing: /var/run/dpdk/spdk_pid901850 00:32:50.104 Removing: /var/run/dpdk/spdk_pid902179 00:32:50.104 Removing: /var/run/dpdk/spdk_pid902557 00:32:50.104 Removing: /var/run/dpdk/spdk_pid902725 00:32:50.104 Removing: /var/run/dpdk/spdk_pid903008 00:32:50.104 Removing: /var/run/dpdk/spdk_pid903266 00:32:50.104 Removing: /var/run/dpdk/spdk_pid903504 00:32:50.104 Removing: /var/run/dpdk/spdk_pid903644 00:32:50.104 Removing: /var/run/dpdk/spdk_pid904148 00:32:50.104 Removing: /var/run/dpdk/spdk_pid904435 00:32:50.104 Removing: /var/run/dpdk/spdk_pid904831 00:32:50.104 Removing: /var/run/dpdk/spdk_pid905199 00:32:50.104 Removing: /var/run/dpdk/spdk_pid905221 00:32:50.104 Removing: /var/run/dpdk/spdk_pid905385 00:32:50.104 Removing: /var/run/dpdk/spdk_pid905640 00:32:50.104 Removing: /var/run/dpdk/spdk_pid905995 00:32:50.104 Removing: /var/run/dpdk/spdk_pid906345 00:32:50.104 Removing: /var/run/dpdk/spdk_pid906700 00:32:50.104 Removing: /var/run/dpdk/spdk_pid906927 00:32:50.104 Removing: /var/run/dpdk/spdk_pid907112 00:32:50.104 Removing: /var/run/dpdk/spdk_pid907439 00:32:50.104 Removing: /var/run/dpdk/spdk_pid907849 00:32:50.104 Removing: /var/run/dpdk/spdk_pid908244 00:32:50.104 Removing: /var/run/dpdk/spdk_pid908590 00:32:50.104 Removing: /var/run/dpdk/spdk_pid908790 00:32:50.104 Removing: /var/run/dpdk/spdk_pid908995 00:32:50.104 Removing: /var/run/dpdk/spdk_pid909338 00:32:50.104 Removing: 
/var/run/dpdk/spdk_pid909687 00:32:50.104 Removing: /var/run/dpdk/spdk_pid910358 00:32:50.104 Removing: /var/run/dpdk/spdk_pid910762 00:32:50.104 Removing: /var/run/dpdk/spdk_pid910980 00:32:50.104 Removing: /var/run/dpdk/spdk_pid911251 00:32:50.104 Removing: /var/run/dpdk/spdk_pid911601 00:32:50.104 Removing: /var/run/dpdk/spdk_pid911949 00:32:50.104 Removing: /var/run/dpdk/spdk_pid912025 00:32:50.104 Removing: /var/run/dpdk/spdk_pid912435 00:32:50.104 Removing: /var/run/dpdk/spdk_pid916886 00:32:50.104 Removing: /var/run/dpdk/spdk_pid971021 00:32:50.104 Removing: /var/run/dpdk/spdk_pid976358 00:32:50.104 Removing: /var/run/dpdk/spdk_pid988015 00:32:50.104 Removing: /var/run/dpdk/spdk_pid994385 00:32:50.104 Removing: /var/run/dpdk/spdk_pid999344 00:32:50.104 Clean 00:32:50.365 09:47:21 -- common/autotest_common.sh@1450 -- # return 0 00:32:50.365 09:47:21 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:32:50.365 09:47:21 -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:50.365 09:47:21 -- common/autotest_common.sh@10 -- # set +x 00:32:50.365 09:47:22 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:32:50.365 09:47:22 -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:50.365 09:47:22 -- common/autotest_common.sh@10 -- # set +x 00:32:50.365 09:47:22 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:50.365 09:47:22 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:50.365 09:47:22 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:50.365 09:47:22 -- spdk/autotest.sh@391 -- # hash lcov 00:32:50.365 09:47:22 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:50.365 09:47:22 -- spdk/autotest.sh@393 -- # hostname 00:32:50.365 09:47:22 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:50.625 geninfo: WARNING: invalid characters removed from testname! 
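Annotation: after the workspace Clean stage, autotest switches to coverage post-processing: lcov captures the test run into cov_test.info (the geninfo warning about the testname is benign), then the runs below merge it with the baseline and strip out-of-tree and example code. Stripped of the repeated --rc flags, the sequence amounts to:

out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"  # merge
lcov -q -r "$out/cov_total.info" '*/dpdk/*'         -o "$out/cov_total.info"      # drop DPDK
lcov -q -r "$out/cov_total.info" '/usr/*'           -o "$out/cov_total.info"      # drop system code
lcov -q -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"      # drop examples
# (same -r treatment for '*/app/spdk_lspci/*' and '*/app/spdk_top/*')
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR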
00:33:17.282 09:47:46 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:18.223 09:47:49 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:20.766 09:47:52 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:22.675 09:47:54 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:25.216 09:47:56 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:27.128 09:47:58 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:29.674 09:48:01 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:29.674 09:48:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:29.674 09:48:01 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:29.674 09:48:01 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:29.674 09:48:01 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:29.674 09:48:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.674 09:48:01 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.674 09:48:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.674 09:48:01 -- paths/export.sh@5 -- $ export PATH 00:33:29.674 09:48:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.674 09:48:01 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:29.674 09:48:01 -- common/autobuild_common.sh@437 -- $ date +%s 00:33:29.674 09:48:01 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718092081.XXXXXX 00:33:29.674 09:48:01 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718092081.wbwUln 00:33:29.674 09:48:01 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:33:29.674 09:48:01 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:33:29.674 09:48:01 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:33:29.674 09:48:01 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:29.674 09:48:01 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:29.674 09:48:01 -- common/autobuild_common.sh@453 -- $ get_config_params 00:33:29.674 09:48:01 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:33:29.674 09:48:01 -- common/autotest_common.sh@10 -- $ set +x 00:33:29.674 09:48:01 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:33:29.674 09:48:01 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:33:29.674 09:48:01 -- pm/common@17 -- $ local monitor 00:33:29.674 09:48:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:29.674 09:48:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:29.674 09:48:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:29.674 09:48:01 -- pm/common@21 -- $ date +%s 00:33:29.674 09:48:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:29.674 09:48:01 -- pm/common@25 -- $ sleep 1 00:33:29.674 
09:48:01 -- pm/common@21 -- $ date +%s 00:33:29.674 09:48:01 -- pm/common@21 -- $ date +%s 00:33:29.674 09:48:01 -- pm/common@21 -- $ date +%s 00:33:29.674 09:48:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718092081 00:33:29.674 09:48:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718092081 00:33:29.674 09:48:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718092081 00:33:29.674 09:48:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718092081 00:33:29.674 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718092081_collect-vmstat.pm.log 00:33:29.674 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718092081_collect-cpu-load.pm.log 00:33:29.674 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718092081_collect-cpu-temp.pm.log 00:33:29.674 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718092081_collect-bmc-pm.bmc.pm.log 00:33:30.617 09:48:02 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:33:30.617 09:48:02 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:33:30.617 09:48:02 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:30.617 09:48:02 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:30.617 09:48:02 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:30.617 09:48:02 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:30.617 09:48:02 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:30.617 09:48:02 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:30.617 09:48:02 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:30.617 09:48:02 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:30.617 09:48:02 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:30.617 09:48:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:30.617 09:48:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:30.617 09:48:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:30.617 09:48:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:33:30.617 09:48:02 -- pm/common@44 -- $ pid=1392716 00:33:30.617 09:48:02 -- pm/common@50 -- $ kill -TERM 1392716 00:33:30.617 09:48:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:30.617 09:48:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:33:30.617 09:48:02 -- pm/common@44 -- $ pid=1392717 00:33:30.617 09:48:02 -- pm/common@50 -- $ 
kill -TERM 1392717 00:33:30.617 09:48:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:30.617 09:48:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:33:30.617 09:48:02 -- pm/common@44 -- $ pid=1392719 00:33:30.617 09:48:02 -- pm/common@50 -- $ kill -TERM 1392719 00:33:30.617 09:48:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:30.617 09:48:02 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:33:30.617 09:48:02 -- pm/common@44 -- $ pid=1392746 00:33:30.617 09:48:02 -- pm/common@50 -- $ sudo -E kill -TERM 1392746 00:33:30.617 + [[ -n 766059 ]] 00:33:30.617 + sudo kill 766059 00:33:30.888 [Pipeline] } 00:33:30.907 [Pipeline] // stage 00:33:30.912 [Pipeline] } 00:33:30.931 [Pipeline] // timeout 00:33:30.937 [Pipeline] } 00:33:30.954 [Pipeline] // catchError 00:33:30.959 [Pipeline] } 00:33:30.976 [Pipeline] // wrap 00:33:30.983 [Pipeline] } 00:33:30.998 [Pipeline] // catchError 00:33:31.007 [Pipeline] stage 00:33:31.009 [Pipeline] { (Epilogue) 00:33:31.025 [Pipeline] catchError 00:33:31.027 [Pipeline] { 00:33:31.042 [Pipeline] echo 00:33:31.044 Cleanup processes 00:33:31.050 [Pipeline] sh 00:33:31.337 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:31.337 1392859 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:33:31.337 1393300 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:31.350 [Pipeline] sh 00:33:31.636 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:31.636 ++ grep -v 'sudo pgrep' 00:33:31.636 ++ awk '{print $1}' 00:33:31.636 + sudo kill -9 1392859 00:33:31.649 [Pipeline] sh 00:33:31.935 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:46.924 [Pipeline] sh 00:33:47.210 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:47.210 Artifacts sizes are good 00:33:47.226 [Pipeline] archiveArtifacts 00:33:47.233 Archiving artifacts 00:33:47.425 [Pipeline] sh 00:33:47.709 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:47.723 [Pipeline] cleanWs 00:33:47.733 [WS-CLEANUP] Deleting project workspace... 00:33:47.733 [WS-CLEANUP] Deferred wipeout is used... 00:33:47.740 [WS-CLEANUP] done 00:33:47.742 [Pipeline] } 00:33:47.760 [Pipeline] // catchError 00:33:47.772 [Pipeline] sh 00:33:48.057 + logger -p user.info -t JENKINS-CI 00:33:48.067 [Pipeline] } 00:33:48.083 [Pipeline] // stage 00:33:48.086 [Pipeline] } 00:33:48.102 [Pipeline] // node 00:33:48.107 [Pipeline] End of Pipeline 00:33:48.136 Finished: SUCCESS